00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2027 00:00:00.001 originally caused by: 00:00:00.001 Started by user Latecki, Karol 00:00:00.057 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.058 The recommended git tool is: git 00:00:00.059 using credential 00000000-0000-0000-0000-000000000002 00:00:00.061 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.097 Fetching changes from the remote Git repository 00:00:00.098 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.122 Using shallow fetch with depth 1 00:00:00.122 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.122 > git --version # timeout=10 00:00:00.150 > git --version # 'git version 2.39.2' 00:00:00.150 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.169 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.169 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/29/24129/6 # timeout=5 00:00:05.652 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.664 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.677 Checking out Revision e33ef006ccd688d2b66122cd0240b989d53c9017 (FETCH_HEAD) 00:00:05.677 > git config core.sparsecheckout # timeout=10 00:00:05.689 > git read-tree -mu HEAD # timeout=10 00:00:05.707 > git checkout -f e33ef006ccd688d2b66122cd0240b989d53c9017 # timeout=5 00:00:05.728 Commit message: "jenkins/jjb: remove nvme tests from distro specific jobs." 00:00:05.728 > git rev-list --no-walk e33ef006ccd688d2b66122cd0240b989d53c9017 # timeout=10 00:00:05.853 [Pipeline] Start of Pipeline 00:00:05.866 [Pipeline] library 00:00:05.867 Loading library shm_lib@master 00:00:05.868 Library shm_lib@master is cached. Copying from home. 00:00:05.886 [Pipeline] node 00:00:20.888 Still waiting to schedule task 00:00:20.889 Waiting for next available executor on ‘DiskNvme&&NetCVL’ 00:00:25.236 Running on CYP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:25.238 [Pipeline] { 00:00:25.250 [Pipeline] catchError 00:00:25.252 [Pipeline] { 00:00:25.267 [Pipeline] wrap 00:00:25.277 [Pipeline] { 00:00:25.292 [Pipeline] stage 00:00:25.295 [Pipeline] { (Prologue) 00:00:25.534 [Pipeline] sh 00:00:25.823 + logger -p user.info -t JENKINS-CI 00:00:25.843 [Pipeline] echo 00:00:25.844 Node: CYP6 00:00:25.853 [Pipeline] sh 00:00:26.158 [Pipeline] setCustomBuildProperty 00:00:26.174 [Pipeline] echo 00:00:26.176 Cleanup processes 00:00:26.183 [Pipeline] sh 00:00:26.472 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:26.472 1376384 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:26.487 [Pipeline] sh 00:00:26.775 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:26.775 ++ grep -v 'sudo pgrep' 00:00:26.775 ++ awk '{print $1}' 00:00:26.775 + sudo kill -9 00:00:26.775 + true 00:00:26.789 [Pipeline] cleanWs 00:00:26.799 [WS-CLEANUP] Deleting project workspace... 00:00:26.799 [WS-CLEANUP] Deferred wipeout is used... 
00:00:26.807 [WS-CLEANUP] done 00:00:26.812 [Pipeline] setCustomBuildProperty 00:00:26.827 [Pipeline] sh 00:00:27.112 + sudo git config --global --replace-all safe.directory '*' 00:00:27.236 [Pipeline] httpRequest 00:00:27.274 [Pipeline] echo 00:00:27.275 Sorcerer 10.211.164.101 is alive 00:00:27.284 [Pipeline] httpRequest 00:00:27.289 HttpMethod: GET 00:00:27.290 URL: http://10.211.164.101/packages/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:00:27.292 Sending request to url: http://10.211.164.101/packages/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:00:27.299 Response Code: HTTP/1.1 200 OK 00:00:27.300 Success: Status code 200 is in the accepted range: 200,404 00:00:27.300 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:00:35.191 [Pipeline] sh 00:00:35.480 + tar --no-same-owner -xf jbp_e33ef006ccd688d2b66122cd0240b989d53c9017.tar.gz 00:00:35.497 [Pipeline] httpRequest 00:00:35.518 [Pipeline] echo 00:00:35.520 Sorcerer 10.211.164.101 is alive 00:00:35.528 [Pipeline] httpRequest 00:00:35.533 HttpMethod: GET 00:00:35.533 URL: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:00:35.535 Sending request to url: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:00:35.552 Response Code: HTTP/1.1 200 OK 00:00:35.552 Success: Status code 200 is in the accepted range: 200,404 00:00:35.553 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:00:58.145 [Pipeline] sh 00:00:58.433 + tar --no-same-owner -xf spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:01:01.779 [Pipeline] sh 00:01:02.063 + git -C spdk log --oneline -n5 00:01:02.063 dbef7efac test: fix dpdk builds on ubuntu24 00:01:02.063 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:01:02.063 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:01:02.063 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:01:02.063 e03c164a1 nvme: add nvme_ctrlr_lock 00:01:02.076 [Pipeline] } 00:01:02.092 [Pipeline] // stage 00:01:02.101 [Pipeline] stage 00:01:02.103 [Pipeline] { (Prepare) 00:01:02.120 [Pipeline] writeFile 00:01:02.137 [Pipeline] sh 00:01:02.424 + logger -p user.info -t JENKINS-CI 00:01:02.437 [Pipeline] sh 00:01:02.722 + logger -p user.info -t JENKINS-CI 00:01:02.735 [Pipeline] sh 00:01:03.020 + cat autorun-spdk.conf 00:01:03.020 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:03.020 SPDK_TEST_NVMF=1 00:01:03.020 SPDK_TEST_NVME_CLI=1 00:01:03.020 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:03.020 SPDK_TEST_NVMF_NICS=e810 00:01:03.020 SPDK_RUN_UBSAN=1 00:01:03.020 NET_TYPE=phy 00:01:03.028 RUN_NIGHTLY=1 00:01:03.033 [Pipeline] readFile 00:01:03.059 [Pipeline] withEnv 00:01:03.061 [Pipeline] { 00:01:03.075 [Pipeline] sh 00:01:03.358 + set -ex 00:01:03.358 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:03.358 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:03.358 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:03.358 ++ SPDK_TEST_NVMF=1 00:01:03.358 ++ SPDK_TEST_NVME_CLI=1 00:01:03.358 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:03.358 ++ SPDK_TEST_NVMF_NICS=e810 00:01:03.358 ++ SPDK_RUN_UBSAN=1 00:01:03.358 ++ NET_TYPE=phy 00:01:03.358 ++ RUN_NIGHTLY=1 00:01:03.358 + case $SPDK_TEST_NVMF_NICS in 00:01:03.358 + DRIVERS=ice 00:01:03.358 + [[ tcp == \r\d\m\a ]] 00:01:03.358 + [[ -n ice ]] 00:01:03.358 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:03.358 
rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:03.358 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:03.358 rmmod: ERROR: Module irdma is not currently loaded 00:01:03.358 rmmod: ERROR: Module i40iw is not currently loaded 00:01:03.358 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:03.358 + true 00:01:03.358 + for D in $DRIVERS 00:01:03.359 + sudo modprobe ice 00:01:03.359 + exit 0 00:01:03.369 [Pipeline] } 00:01:03.386 [Pipeline] // withEnv 00:01:03.392 [Pipeline] } 00:01:03.410 [Pipeline] // stage 00:01:03.419 [Pipeline] catchError 00:01:03.421 [Pipeline] { 00:01:03.439 [Pipeline] timeout 00:01:03.439 Timeout set to expire in 50 min 00:01:03.442 [Pipeline] { 00:01:03.457 [Pipeline] stage 00:01:03.458 [Pipeline] { (Tests) 00:01:03.472 [Pipeline] sh 00:01:03.761 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:03.761 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:03.761 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:03.761 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:03.761 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:03.761 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:03.761 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:03.761 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:03.761 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:03.761 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:03.761 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:03.761 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:03.761 + source /etc/os-release 00:01:03.761 ++ NAME='Fedora Linux' 00:01:03.761 ++ VERSION='38 (Cloud Edition)' 00:01:03.761 ++ ID=fedora 00:01:03.761 ++ VERSION_ID=38 00:01:03.761 ++ VERSION_CODENAME= 00:01:03.761 ++ PLATFORM_ID=platform:f38 00:01:03.761 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:03.761 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:03.761 ++ LOGO=fedora-logo-icon 00:01:03.761 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:03.761 ++ HOME_URL=https://fedoraproject.org/ 00:01:03.761 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:03.761 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:03.761 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:03.761 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:03.761 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:03.761 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:03.761 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:03.761 ++ SUPPORT_END=2024-05-14 00:01:03.761 ++ VARIANT='Cloud Edition' 00:01:03.761 ++ VARIANT_ID=cloud 00:01:03.761 + uname -a 00:01:03.761 Linux spdk-CYP-06 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:03.761 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:07.061 Hugepages 00:01:07.061 node hugesize free / total 00:01:07.061 node0 1048576kB 0 / 0 00:01:07.061 node0 2048kB 0 / 0 00:01:07.061 node1 1048576kB 0 / 0 00:01:07.061 node1 2048kB 0 / 0 00:01:07.061 00:01:07.061 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:07.061 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:07.061 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:07.061 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:07.061 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:07.061 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:07.061 I/OAT 
0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:07.061 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:07.061 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:07.322 NVMe 0000:65:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:07.322 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:07.322 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:07.322 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:07.322 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:07.322 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:07.322 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:07.322 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:07.322 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:07.322 + rm -f /tmp/spdk-ld-path 00:01:07.322 + source autorun-spdk.conf 00:01:07.322 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:07.322 ++ SPDK_TEST_NVMF=1 00:01:07.322 ++ SPDK_TEST_NVME_CLI=1 00:01:07.322 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:07.322 ++ SPDK_TEST_NVMF_NICS=e810 00:01:07.322 ++ SPDK_RUN_UBSAN=1 00:01:07.322 ++ NET_TYPE=phy 00:01:07.322 ++ RUN_NIGHTLY=1 00:01:07.322 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:07.322 + [[ -n '' ]] 00:01:07.322 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:07.322 + for M in /var/spdk/build-*-manifest.txt 00:01:07.322 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:07.323 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:07.323 + for M in /var/spdk/build-*-manifest.txt 00:01:07.323 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:07.323 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:07.323 ++ uname 00:01:07.323 + [[ Linux == \L\i\n\u\x ]] 00:01:07.323 + sudo dmesg -T 00:01:07.323 + sudo dmesg --clear 00:01:07.323 + dmesg_pid=1377469 00:01:07.323 + [[ Fedora Linux == FreeBSD ]] 00:01:07.323 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:07.323 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:07.323 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:07.323 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:07.323 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:07.323 + [[ -x /usr/src/fio-static/fio ]] 00:01:07.323 + export FIO_BIN=/usr/src/fio-static/fio 00:01:07.323 + sudo dmesg -Tw 00:01:07.323 + FIO_BIN=/usr/src/fio-static/fio 00:01:07.323 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:07.323 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:07.323 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:07.323 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:07.323 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:07.323 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:07.323 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:07.323 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:07.323 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:07.323 Test configuration: 00:01:07.323 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:07.323 SPDK_TEST_NVMF=1 00:01:07.323 SPDK_TEST_NVME_CLI=1 00:01:07.323 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:07.323 SPDK_TEST_NVMF_NICS=e810 00:01:07.323 SPDK_RUN_UBSAN=1 00:01:07.323 NET_TYPE=phy 00:01:07.323 RUN_NIGHTLY=1 17:38:11 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:07.323 17:38:11 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:07.323 17:38:11 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:07.323 17:38:11 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:07.323 17:38:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:07.323 17:38:11 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:07.323 17:38:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:07.323 17:38:11 -- paths/export.sh@5 -- $ export PATH 00:01:07.323 17:38:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:07.323 17:38:11 -- common/autobuild_common.sh@437 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:07.583 17:38:11 -- common/autobuild_common.sh@438 -- $ date +%s 00:01:07.583 17:38:11 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721662691.XXXXXX 00:01:07.583 17:38:11 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1721662691.7JKH5G 00:01:07.583 17:38:11 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]] 00:01:07.583 17:38:11 -- common/autobuild_common.sh@444 -- $ '[' -n '' ']' 
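
[Editor's aside] The trace above shows spdk/autorun.sh being handed autorun-spdk.conf, which is plain KEY=VALUE shell and is simply sourced. As a rough, hypothetical sketch (not SPDK's actual autorun.sh logic), a conf file of this shape can be sourced and sanity-checked before launching tests:

#!/usr/bin/env bash
# Illustrative only -- a minimal stand-in for consuming a KEY=VALUE conf file
# such as autorun-spdk.conf; the real handling lives in spdk/autorun.sh.
set -euo pipefail

conf="${1:-autorun-spdk.conf}"
[[ -f "$conf" ]] || { echo "conf file not found: $conf" >&2; exit 1; }

# The file contains plain shell assignments, so sourcing it turns each line
# into an ordinary shell variable of this process.
# shellcheck disable=SC1090
source "$conf"

# Hypothetical sanity checks on a few of the flags visible in the log above.
: "${SPDK_RUN_FUNCTIONAL_TEST:=0}"
: "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"
echo "functional=${SPDK_RUN_FUNCTIONAL_TEST} transport=${SPDK_TEST_NVMF_TRANSPORT} nics=${SPDK_TEST_NVMF_NICS:-none}"
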
00:01:07.583 17:38:11 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:07.583 17:38:11 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:07.583 17:38:11 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:07.583 17:38:11 -- common/autobuild_common.sh@454 -- $ get_config_params 00:01:07.583 17:38:11 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:07.583 17:38:11 -- common/autotest_common.sh@10 -- $ set +x 00:01:07.583 17:38:11 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:07.583 17:38:11 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:07.583 17:38:11 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:07.583 17:38:11 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:07.583 17:38:11 -- spdk/autobuild.sh@16 -- $ date -u 00:01:07.583 Mon Jul 22 03:38:11 PM UTC 2024 00:01:07.583 17:38:11 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:07.583 LTS-60-gdbef7efac 00:01:07.583 17:38:11 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:07.583 17:38:11 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:07.583 17:38:11 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:07.583 17:38:11 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:07.583 17:38:11 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:07.583 17:38:11 -- common/autotest_common.sh@10 -- $ set +x 00:01:07.583 ************************************ 00:01:07.583 START TEST ubsan 00:01:07.583 ************************************ 00:01:07.583 17:38:11 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:07.583 using ubsan 00:01:07.583 00:01:07.583 real 0m0.001s 00:01:07.583 user 0m0.001s 00:01:07.583 sys 0m0.000s 00:01:07.583 17:38:11 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:07.583 17:38:11 -- common/autotest_common.sh@10 -- $ set +x 00:01:07.583 ************************************ 00:01:07.583 END TEST ubsan 00:01:07.583 ************************************ 00:01:07.583 17:38:11 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:07.583 17:38:11 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:07.583 17:38:11 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:07.583 17:38:11 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:07.583 17:38:11 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:07.583 17:38:11 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:07.583 17:38:11 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:07.583 17:38:11 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:07.583 17:38:11 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:07.583 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:07.583 Using default DPDK in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:08.154 Using 'verbs' RDMA provider 00:01:23.633 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:01:38.534 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:38.534 Creating mk/config.mk...done. 00:01:38.534 Creating mk/cc.flags.mk...done. 00:01:38.534 Type 'make' to build. 00:01:38.534 17:38:40 -- spdk/autobuild.sh@69 -- $ run_test make make -j128 00:01:38.535 17:38:40 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:38.535 17:38:40 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:38.535 17:38:40 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.535 ************************************ 00:01:38.535 START TEST make 00:01:38.535 ************************************ 00:01:38.535 17:38:40 -- common/autotest_common.sh@1104 -- $ make -j128 00:01:38.535 make[1]: Nothing to be done for 'all'. 00:01:45.160 The Meson build system 00:01:45.160 Version: 1.3.1 00:01:45.160 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:45.160 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:45.160 Build type: native build 00:01:45.160 Program cat found: YES (/usr/bin/cat) 00:01:45.160 Project name: DPDK 00:01:45.160 Project version: 23.11.0 00:01:45.160 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:45.160 C linker for the host machine: cc ld.bfd 2.39-16 00:01:45.160 Host machine cpu family: x86_64 00:01:45.160 Host machine cpu: x86_64 00:01:45.160 Message: ## Building in Developer Mode ## 00:01:45.160 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:45.160 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:45.160 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:45.160 Program python3 found: YES (/usr/bin/python3) 00:01:45.160 Program cat found: YES (/usr/bin/cat) 00:01:45.160 Compiler for C supports arguments -march=native: YES 00:01:45.160 Checking for size of "void *" : 8 00:01:45.160 Checking for size of "void *" : 8 (cached) 00:01:45.160 Library m found: YES 00:01:45.160 Library numa found: YES 00:01:45.160 Has header "numaif.h" : YES 00:01:45.160 Library fdt found: NO 00:01:45.160 Library execinfo found: NO 00:01:45.160 Has header "execinfo.h" : YES 00:01:45.160 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:45.160 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:45.160 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:45.160 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:45.160 Run-time dependency openssl found: YES 3.0.9 00:01:45.160 Run-time dependency libpcap found: YES 1.10.4 00:01:45.160 Has header "pcap.h" with dependency libpcap: YES 00:01:45.160 Compiler for C supports arguments -Wcast-qual: YES 00:01:45.160 Compiler for C supports arguments -Wdeprecated: YES 00:01:45.160 Compiler for C supports arguments -Wformat: YES 00:01:45.160 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:45.160 Compiler for C supports arguments -Wformat-security: NO 00:01:45.160 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:45.160 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:45.160 Compiler for C 
supports arguments -Wnested-externs: YES 00:01:45.160 Compiler for C supports arguments -Wold-style-definition: YES 00:01:45.160 Compiler for C supports arguments -Wpointer-arith: YES 00:01:45.160 Compiler for C supports arguments -Wsign-compare: YES 00:01:45.160 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:45.160 Compiler for C supports arguments -Wundef: YES 00:01:45.160 Compiler for C supports arguments -Wwrite-strings: YES 00:01:45.160 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:45.160 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:45.160 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:45.160 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:45.160 Program objdump found: YES (/usr/bin/objdump) 00:01:45.160 Compiler for C supports arguments -mavx512f: YES 00:01:45.160 Checking if "AVX512 checking" compiles: YES 00:01:45.160 Fetching value of define "__SSE4_2__" : 1 00:01:45.160 Fetching value of define "__AES__" : 1 00:01:45.160 Fetching value of define "__AVX__" : 1 00:01:45.160 Fetching value of define "__AVX2__" : 1 00:01:45.160 Fetching value of define "__AVX512BW__" : 1 00:01:45.160 Fetching value of define "__AVX512CD__" : 1 00:01:45.160 Fetching value of define "__AVX512DQ__" : 1 00:01:45.160 Fetching value of define "__AVX512F__" : 1 00:01:45.160 Fetching value of define "__AVX512VL__" : 1 00:01:45.160 Fetching value of define "__PCLMUL__" : 1 00:01:45.160 Fetching value of define "__RDRND__" : 1 00:01:45.160 Fetching value of define "__RDSEED__" : 1 00:01:45.160 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:45.160 Fetching value of define "__znver1__" : (undefined) 00:01:45.160 Fetching value of define "__znver2__" : (undefined) 00:01:45.160 Fetching value of define "__znver3__" : (undefined) 00:01:45.160 Fetching value of define "__znver4__" : (undefined) 00:01:45.160 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:45.160 Message: lib/log: Defining dependency "log" 00:01:45.160 Message: lib/kvargs: Defining dependency "kvargs" 00:01:45.160 Message: lib/telemetry: Defining dependency "telemetry" 00:01:45.160 Checking for function "getentropy" : NO 00:01:45.160 Message: lib/eal: Defining dependency "eal" 00:01:45.161 Message: lib/ring: Defining dependency "ring" 00:01:45.161 Message: lib/rcu: Defining dependency "rcu" 00:01:45.161 Message: lib/mempool: Defining dependency "mempool" 00:01:45.161 Message: lib/mbuf: Defining dependency "mbuf" 00:01:45.161 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:45.161 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:45.161 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:45.161 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:45.161 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:45.161 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:45.161 Compiler for C supports arguments -mpclmul: YES 00:01:45.161 Compiler for C supports arguments -maes: YES 00:01:45.161 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:45.161 Compiler for C supports arguments -mavx512bw: YES 00:01:45.161 Compiler for C supports arguments -mavx512dq: YES 00:01:45.161 Compiler for C supports arguments -mavx512vl: YES 00:01:45.161 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:45.161 Compiler for C supports arguments -mavx2: YES 00:01:45.161 Compiler for C supports arguments -mavx: YES 00:01:45.161 Message: lib/net: Defining dependency "net" 
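
[Editor's aside] The runs of "Compiler for C supports arguments ..." and "Fetching value of define ..." above are meson probing the toolchain for flag and ISA support. A rough stand-alone equivalent of that probe (an illustrative one-liner, not what meson itself executes) is to ask the compiler which AVX-512 macros it defines for the native CPU:

# Hypothetical stand-alone check mirroring the meson probes logged above:
# list the AVX-512 feature macros the compiler defines with -march=native.
cc -march=native -dM -E - </dev/null | grep -E '__AVX512(F|BW|CD|DQ|VL)__' || true
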
00:01:45.161 Message: lib/meter: Defining dependency "meter" 00:01:45.161 Message: lib/ethdev: Defining dependency "ethdev" 00:01:45.161 Message: lib/pci: Defining dependency "pci" 00:01:45.161 Message: lib/cmdline: Defining dependency "cmdline" 00:01:45.161 Message: lib/hash: Defining dependency "hash" 00:01:45.161 Message: lib/timer: Defining dependency "timer" 00:01:45.161 Message: lib/compressdev: Defining dependency "compressdev" 00:01:45.161 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:45.161 Message: lib/dmadev: Defining dependency "dmadev" 00:01:45.161 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:45.161 Message: lib/power: Defining dependency "power" 00:01:45.161 Message: lib/reorder: Defining dependency "reorder" 00:01:45.161 Message: lib/security: Defining dependency "security" 00:01:45.161 Has header "linux/userfaultfd.h" : YES 00:01:45.161 Has header "linux/vduse.h" : YES 00:01:45.161 Message: lib/vhost: Defining dependency "vhost" 00:01:45.161 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:45.161 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:45.161 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:45.161 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:45.161 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:45.161 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:45.161 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:45.161 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:45.161 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:45.161 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:45.161 Program doxygen found: YES (/usr/bin/doxygen) 00:01:45.161 Configuring doxy-api-html.conf using configuration 00:01:45.161 Configuring doxy-api-man.conf using configuration 00:01:45.161 Program mandb found: YES (/usr/bin/mandb) 00:01:45.161 Program sphinx-build found: NO 00:01:45.161 Configuring rte_build_config.h using configuration 00:01:45.161 Message: 00:01:45.161 ================= 00:01:45.161 Applications Enabled 00:01:45.161 ================= 00:01:45.161 00:01:45.161 apps: 00:01:45.161 00:01:45.161 00:01:45.161 Message: 00:01:45.161 ================= 00:01:45.161 Libraries Enabled 00:01:45.161 ================= 00:01:45.161 00:01:45.161 libs: 00:01:45.161 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:45.161 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:45.161 cryptodev, dmadev, power, reorder, security, vhost, 00:01:45.161 00:01:45.161 Message: 00:01:45.161 =============== 00:01:45.161 Drivers Enabled 00:01:45.161 =============== 00:01:45.161 00:01:45.161 common: 00:01:45.161 00:01:45.161 bus: 00:01:45.161 pci, vdev, 00:01:45.161 mempool: 00:01:45.161 ring, 00:01:45.161 dma: 00:01:45.161 00:01:45.161 net: 00:01:45.161 00:01:45.161 crypto: 00:01:45.161 00:01:45.161 compress: 00:01:45.161 00:01:45.161 vdpa: 00:01:45.161 00:01:45.161 00:01:45.161 Message: 00:01:45.161 ================= 00:01:45.161 Content Skipped 00:01:45.161 ================= 00:01:45.161 00:01:45.161 apps: 00:01:45.161 dumpcap: explicitly disabled via build config 00:01:45.161 graph: explicitly disabled via build config 00:01:45.161 pdump: explicitly disabled via build config 00:01:45.161 proc-info: explicitly disabled via build config 00:01:45.161 test-acl: explicitly disabled via build config 
00:01:45.161 test-bbdev: explicitly disabled via build config 00:01:45.161 test-cmdline: explicitly disabled via build config 00:01:45.161 test-compress-perf: explicitly disabled via build config 00:01:45.161 test-crypto-perf: explicitly disabled via build config 00:01:45.161 test-dma-perf: explicitly disabled via build config 00:01:45.161 test-eventdev: explicitly disabled via build config 00:01:45.161 test-fib: explicitly disabled via build config 00:01:45.161 test-flow-perf: explicitly disabled via build config 00:01:45.161 test-gpudev: explicitly disabled via build config 00:01:45.161 test-mldev: explicitly disabled via build config 00:01:45.161 test-pipeline: explicitly disabled via build config 00:01:45.161 test-pmd: explicitly disabled via build config 00:01:45.161 test-regex: explicitly disabled via build config 00:01:45.161 test-sad: explicitly disabled via build config 00:01:45.161 test-security-perf: explicitly disabled via build config 00:01:45.161 00:01:45.161 libs: 00:01:45.161 metrics: explicitly disabled via build config 00:01:45.161 acl: explicitly disabled via build config 00:01:45.161 bbdev: explicitly disabled via build config 00:01:45.161 bitratestats: explicitly disabled via build config 00:01:45.161 bpf: explicitly disabled via build config 00:01:45.161 cfgfile: explicitly disabled via build config 00:01:45.161 distributor: explicitly disabled via build config 00:01:45.161 efd: explicitly disabled via build config 00:01:45.161 eventdev: explicitly disabled via build config 00:01:45.161 dispatcher: explicitly disabled via build config 00:01:45.161 gpudev: explicitly disabled via build config 00:01:45.161 gro: explicitly disabled via build config 00:01:45.161 gso: explicitly disabled via build config 00:01:45.161 ip_frag: explicitly disabled via build config 00:01:45.161 jobstats: explicitly disabled via build config 00:01:45.161 latencystats: explicitly disabled via build config 00:01:45.161 lpm: explicitly disabled via build config 00:01:45.161 member: explicitly disabled via build config 00:01:45.161 pcapng: explicitly disabled via build config 00:01:45.161 rawdev: explicitly disabled via build config 00:01:45.161 regexdev: explicitly disabled via build config 00:01:45.161 mldev: explicitly disabled via build config 00:01:45.161 rib: explicitly disabled via build config 00:01:45.161 sched: explicitly disabled via build config 00:01:45.161 stack: explicitly disabled via build config 00:01:45.161 ipsec: explicitly disabled via build config 00:01:45.161 pdcp: explicitly disabled via build config 00:01:45.161 fib: explicitly disabled via build config 00:01:45.161 port: explicitly disabled via build config 00:01:45.161 pdump: explicitly disabled via build config 00:01:45.161 table: explicitly disabled via build config 00:01:45.161 pipeline: explicitly disabled via build config 00:01:45.161 graph: explicitly disabled via build config 00:01:45.161 node: explicitly disabled via build config 00:01:45.161 00:01:45.161 drivers: 00:01:45.161 common/cpt: not in enabled drivers build config 00:01:45.161 common/dpaax: not in enabled drivers build config 00:01:45.161 common/iavf: not in enabled drivers build config 00:01:45.161 common/idpf: not in enabled drivers build config 00:01:45.161 common/mvep: not in enabled drivers build config 00:01:45.161 common/octeontx: not in enabled drivers build config 00:01:45.161 bus/auxiliary: not in enabled drivers build config 00:01:45.161 bus/cdx: not in enabled drivers build config 00:01:45.161 bus/dpaa: not in enabled drivers build config 
00:01:45.161 bus/fslmc: not in enabled drivers build config 00:01:45.161 bus/ifpga: not in enabled drivers build config 00:01:45.161 bus/platform: not in enabled drivers build config 00:01:45.161 bus/vmbus: not in enabled drivers build config 00:01:45.161 common/cnxk: not in enabled drivers build config 00:01:45.161 common/mlx5: not in enabled drivers build config 00:01:45.161 common/nfp: not in enabled drivers build config 00:01:45.161 common/qat: not in enabled drivers build config 00:01:45.161 common/sfc_efx: not in enabled drivers build config 00:01:45.161 mempool/bucket: not in enabled drivers build config 00:01:45.161 mempool/cnxk: not in enabled drivers build config 00:01:45.161 mempool/dpaa: not in enabled drivers build config 00:01:45.161 mempool/dpaa2: not in enabled drivers build config 00:01:45.161 mempool/octeontx: not in enabled drivers build config 00:01:45.161 mempool/stack: not in enabled drivers build config 00:01:45.161 dma/cnxk: not in enabled drivers build config 00:01:45.161 dma/dpaa: not in enabled drivers build config 00:01:45.161 dma/dpaa2: not in enabled drivers build config 00:01:45.161 dma/hisilicon: not in enabled drivers build config 00:01:45.161 dma/idxd: not in enabled drivers build config 00:01:45.161 dma/ioat: not in enabled drivers build config 00:01:45.161 dma/skeleton: not in enabled drivers build config 00:01:45.161 net/af_packet: not in enabled drivers build config 00:01:45.161 net/af_xdp: not in enabled drivers build config 00:01:45.161 net/ark: not in enabled drivers build config 00:01:45.161 net/atlantic: not in enabled drivers build config 00:01:45.161 net/avp: not in enabled drivers build config 00:01:45.161 net/axgbe: not in enabled drivers build config 00:01:45.161 net/bnx2x: not in enabled drivers build config 00:01:45.161 net/bnxt: not in enabled drivers build config 00:01:45.161 net/bonding: not in enabled drivers build config 00:01:45.161 net/cnxk: not in enabled drivers build config 00:01:45.161 net/cpfl: not in enabled drivers build config 00:01:45.161 net/cxgbe: not in enabled drivers build config 00:01:45.161 net/dpaa: not in enabled drivers build config 00:01:45.161 net/dpaa2: not in enabled drivers build config 00:01:45.162 net/e1000: not in enabled drivers build config 00:01:45.162 net/ena: not in enabled drivers build config 00:01:45.162 net/enetc: not in enabled drivers build config 00:01:45.162 net/enetfec: not in enabled drivers build config 00:01:45.162 net/enic: not in enabled drivers build config 00:01:45.162 net/failsafe: not in enabled drivers build config 00:01:45.162 net/fm10k: not in enabled drivers build config 00:01:45.162 net/gve: not in enabled drivers build config 00:01:45.162 net/hinic: not in enabled drivers build config 00:01:45.162 net/hns3: not in enabled drivers build config 00:01:45.162 net/i40e: not in enabled drivers build config 00:01:45.162 net/iavf: not in enabled drivers build config 00:01:45.162 net/ice: not in enabled drivers build config 00:01:45.162 net/idpf: not in enabled drivers build config 00:01:45.162 net/igc: not in enabled drivers build config 00:01:45.162 net/ionic: not in enabled drivers build config 00:01:45.162 net/ipn3ke: not in enabled drivers build config 00:01:45.162 net/ixgbe: not in enabled drivers build config 00:01:45.162 net/mana: not in enabled drivers build config 00:01:45.162 net/memif: not in enabled drivers build config 00:01:45.162 net/mlx4: not in enabled drivers build config 00:01:45.162 net/mlx5: not in enabled drivers build config 00:01:45.162 net/mvneta: not in enabled 
drivers build config 00:01:45.162 net/mvpp2: not in enabled drivers build config 00:01:45.162 net/netvsc: not in enabled drivers build config 00:01:45.162 net/nfb: not in enabled drivers build config 00:01:45.162 net/nfp: not in enabled drivers build config 00:01:45.162 net/ngbe: not in enabled drivers build config 00:01:45.162 net/null: not in enabled drivers build config 00:01:45.162 net/octeontx: not in enabled drivers build config 00:01:45.162 net/octeon_ep: not in enabled drivers build config 00:01:45.162 net/pcap: not in enabled drivers build config 00:01:45.162 net/pfe: not in enabled drivers build config 00:01:45.162 net/qede: not in enabled drivers build config 00:01:45.162 net/ring: not in enabled drivers build config 00:01:45.162 net/sfc: not in enabled drivers build config 00:01:45.162 net/softnic: not in enabled drivers build config 00:01:45.162 net/tap: not in enabled drivers build config 00:01:45.162 net/thunderx: not in enabled drivers build config 00:01:45.162 net/txgbe: not in enabled drivers build config 00:01:45.162 net/vdev_netvsc: not in enabled drivers build config 00:01:45.162 net/vhost: not in enabled drivers build config 00:01:45.162 net/virtio: not in enabled drivers build config 00:01:45.162 net/vmxnet3: not in enabled drivers build config 00:01:45.162 raw/*: missing internal dependency, "rawdev" 00:01:45.162 crypto/armv8: not in enabled drivers build config 00:01:45.162 crypto/bcmfs: not in enabled drivers build config 00:01:45.162 crypto/caam_jr: not in enabled drivers build config 00:01:45.162 crypto/ccp: not in enabled drivers build config 00:01:45.162 crypto/cnxk: not in enabled drivers build config 00:01:45.162 crypto/dpaa_sec: not in enabled drivers build config 00:01:45.162 crypto/dpaa2_sec: not in enabled drivers build config 00:01:45.162 crypto/ipsec_mb: not in enabled drivers build config 00:01:45.162 crypto/mlx5: not in enabled drivers build config 00:01:45.162 crypto/mvsam: not in enabled drivers build config 00:01:45.162 crypto/nitrox: not in enabled drivers build config 00:01:45.162 crypto/null: not in enabled drivers build config 00:01:45.162 crypto/octeontx: not in enabled drivers build config 00:01:45.162 crypto/openssl: not in enabled drivers build config 00:01:45.162 crypto/scheduler: not in enabled drivers build config 00:01:45.162 crypto/uadk: not in enabled drivers build config 00:01:45.162 crypto/virtio: not in enabled drivers build config 00:01:45.162 compress/isal: not in enabled drivers build config 00:01:45.162 compress/mlx5: not in enabled drivers build config 00:01:45.162 compress/octeontx: not in enabled drivers build config 00:01:45.162 compress/zlib: not in enabled drivers build config 00:01:45.162 regex/*: missing internal dependency, "regexdev" 00:01:45.162 ml/*: missing internal dependency, "mldev" 00:01:45.162 vdpa/ifc: not in enabled drivers build config 00:01:45.162 vdpa/mlx5: not in enabled drivers build config 00:01:45.162 vdpa/nfp: not in enabled drivers build config 00:01:45.162 vdpa/sfc: not in enabled drivers build config 00:01:45.162 event/*: missing internal dependency, "eventdev" 00:01:45.162 baseband/*: missing internal dependency, "bbdev" 00:01:45.162 gpu/*: missing internal dependency, "gpudev" 00:01:45.162 00:01:45.162 00:01:45.423 Build targets in project: 84 00:01:45.423 00:01:45.423 DPDK 23.11.0 00:01:45.423 00:01:45.423 User defined options 00:01:45.423 buildtype : debug 00:01:45.423 default_library : shared 00:01:45.423 libdir : lib 00:01:45.423 prefix : 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:45.423 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:01:45.423 c_link_args : 00:01:45.423 cpu_instruction_set: native 00:01:45.423 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:45.423 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,pcapng,bbdev 00:01:45.423 enable_docs : false 00:01:45.423 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:45.423 enable_kmods : false 00:01:45.423 tests : false 00:01:45.423 00:01:45.423 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:45.683 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:45.951 [1/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:45.951 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:45.951 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:45.952 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:45.952 [5/264] Linking static target lib/librte_kvargs.a 00:01:45.952 [6/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:45.952 [7/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:45.952 [8/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:45.952 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:45.952 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:45.952 [11/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:45.952 [12/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:45.952 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:45.952 [14/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:45.952 [15/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:45.952 [16/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:45.952 [17/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:45.952 [18/264] Linking static target lib/librte_log.a 00:01:45.952 [19/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:45.952 [20/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:45.952 [21/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:45.952 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:45.952 [23/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:45.952 [24/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:45.952 [25/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:45.952 [26/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:46.209 [27/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:46.209 [28/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 
00:01:46.209 [29/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:46.209 [30/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:46.209 [31/264] Linking static target lib/librte_pci.a 00:01:46.209 [32/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:46.209 [33/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:46.209 [34/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:46.209 [35/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:46.209 [36/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:46.209 [37/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:46.209 [38/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:46.209 [39/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:46.209 [40/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:46.209 [41/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:46.209 [42/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:46.209 [43/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:46.469 [44/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:46.469 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:46.469 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:46.469 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:46.469 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:46.469 [49/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.469 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:46.469 [51/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:46.469 [52/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.469 [53/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:46.469 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:46.469 [55/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:46.469 [56/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:46.469 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:46.469 [58/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:46.469 [59/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:46.469 [60/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:46.469 [61/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:46.469 [62/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:46.469 [63/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:46.469 [64/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:46.469 [65/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:46.469 [66/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:46.469 [67/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:46.469 [68/264] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:46.469 [69/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:46.469 [70/264] Linking static target lib/librte_ring.a 00:01:46.469 [71/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:46.469 [72/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:46.469 [73/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:46.469 [74/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:46.469 [75/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:46.469 [76/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:46.469 [77/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:46.469 [78/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:46.469 [79/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:46.469 [80/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:46.469 [81/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:46.469 [82/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:46.469 [83/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:46.469 [84/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:46.469 [85/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:46.469 [86/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:46.469 [87/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:46.469 [88/264] Linking static target lib/librte_timer.a 00:01:46.469 [89/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:46.469 [90/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:46.469 [91/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:46.469 [92/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:46.469 [93/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:46.469 [94/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:46.469 [95/264] Linking static target lib/librte_meter.a 00:01:46.469 [96/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:46.469 [97/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:46.469 [98/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:46.469 [99/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:46.469 [100/264] Linking static target lib/librte_telemetry.a 00:01:46.469 [101/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:46.469 [102/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:46.469 [103/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:46.469 [104/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:46.469 [105/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:46.469 [106/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:46.469 [107/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:46.469 [108/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:46.469 [109/264] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:46.469 [110/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:46.469 [111/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:46.469 [112/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:46.469 [113/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:46.469 [114/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:46.729 [115/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:46.729 [116/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:46.729 [117/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:46.729 [118/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:46.729 [119/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:46.729 [120/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:46.729 [121/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:46.729 [122/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:46.729 [123/264] Linking static target lib/librte_cmdline.a 00:01:46.729 [124/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:46.729 [125/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:46.729 [126/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:46.729 [127/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:46.729 [128/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:46.729 [129/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:46.729 [130/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:46.729 [131/264] Linking static target lib/librte_dmadev.a 00:01:46.729 [132/264] Linking static target lib/librte_net.a 00:01:46.729 [133/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:46.729 [134/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:46.729 [135/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:46.729 [136/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:46.729 [137/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:46.729 [138/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:46.729 [139/264] Linking static target lib/librte_compressdev.a 00:01:46.729 [140/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:46.729 [141/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:46.729 [142/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:46.729 [143/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:46.729 [144/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:46.729 [145/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:46.729 [146/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.729 [147/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:46.729 [148/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:46.729 [149/264] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:46.729 [150/264] Linking static target lib/librte_reorder.a 00:01:46.729 [151/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:46.729 [152/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:46.729 [153/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:46.729 [154/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:46.729 [155/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:46.729 [156/264] Linking target lib/librte_log.so.24.0 00:01:46.729 [157/264] Linking static target lib/librte_power.a 00:01:46.729 [158/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:46.729 [159/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:46.729 [160/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:46.729 [161/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:46.729 [162/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:46.729 [163/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:46.729 [164/264] Linking static target lib/librte_rcu.a 00:01:46.729 [165/264] Linking static target lib/librte_security.a 00:01:46.729 [166/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:46.729 [167/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:46.729 [168/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:46.729 [169/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.729 [170/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:46.729 [171/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:46.729 [172/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:46.729 [173/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:46.729 [174/264] Linking static target lib/librte_mbuf.a 00:01:46.729 [175/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:46.729 [176/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:46.729 [177/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.729 [178/264] Linking static target drivers/librte_bus_vdev.a 00:01:46.729 [179/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:46.729 [180/264] Linking static target lib/librte_hash.a 00:01:46.729 [181/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:46.989 [182/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:46.989 [183/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:46.989 [184/264] Linking target lib/librte_kvargs.so.24.0 00:01:46.989 [185/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:46.989 [186/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:46.989 [187/264] Linking static target lib/librte_mempool.a 00:01:46.989 [188/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:46.989 [189/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.989 [190/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.989 
[191/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:46.989 [192/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:46.989 [193/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:46.989 [194/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:46.989 [195/264] Linking static target drivers/librte_bus_pci.a 00:01:46.989 [196/264] Linking static target lib/librte_cryptodev.a 00:01:46.989 [197/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:46.989 [198/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:46.989 [199/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:46.989 [200/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:46.989 [201/264] Linking static target drivers/librte_mempool_ring.a 00:01:46.989 [202/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.989 [203/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.250 [204/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.250 [205/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:47.250 [206/264] Linking target lib/librte_telemetry.so.24.0 00:01:47.250 [207/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.250 [208/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.250 [209/264] Linking static target lib/librte_eal.a 00:01:47.250 [210/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:47.250 [211/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:47.250 [212/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.510 [213/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:47.510 [214/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.511 [215/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:47.771 [216/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.771 [217/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.771 [218/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.771 [219/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.771 [220/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:48.032 [221/264] Linking static target lib/librte_ethdev.a 00:01:48.032 [222/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.032 [223/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.605 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:48.605 [225/264] Linking static target lib/librte_vhost.a 00:01:49.250 [226/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.163 [227/264] 
Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.751 [228/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.134 [229/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.134 [230/264] Linking target lib/librte_eal.so.24.0 00:01:59.134 [231/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:59.395 [232/264] Linking target lib/librte_ring.so.24.0 00:01:59.395 [233/264] Linking target lib/librte_pci.so.24.0 00:01:59.395 [234/264] Linking target lib/librte_timer.so.24.0 00:01:59.395 [235/264] Linking target lib/librte_meter.so.24.0 00:01:59.395 [236/264] Linking target lib/librte_dmadev.so.24.0 00:01:59.395 [237/264] Linking target drivers/librte_bus_vdev.so.24.0 00:01:59.395 [238/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:59.395 [239/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:59.395 [240/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:59.395 [241/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:59.395 [242/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:59.395 [243/264] Linking target drivers/librte_bus_pci.so.24.0 00:01:59.395 [244/264] Linking target lib/librte_rcu.so.24.0 00:01:59.395 [245/264] Linking target lib/librte_mempool.so.24.0 00:01:59.655 [246/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:59.655 [247/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:59.655 [248/264] Linking target drivers/librte_mempool_ring.so.24.0 00:01:59.655 [249/264] Linking target lib/librte_mbuf.so.24.0 00:01:59.915 [250/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:59.915 [251/264] Linking target lib/librte_reorder.so.24.0 00:01:59.915 [252/264] Linking target lib/librte_compressdev.so.24.0 00:01:59.915 [253/264] Linking target lib/librte_net.so.24.0 00:01:59.915 [254/264] Linking target lib/librte_cryptodev.so.24.0 00:02:00.176 [255/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:00.176 [256/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:00.176 [257/264] Linking target lib/librte_cmdline.so.24.0 00:02:00.176 [258/264] Linking target lib/librte_security.so.24.0 00:02:00.176 [259/264] Linking target lib/librte_hash.so.24.0 00:02:00.176 [260/264] Linking target lib/librte_ethdev.so.24.0 00:02:00.176 [261/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:00.437 [262/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:00.437 [263/264] Linking target lib/librte_power.so.24.0 00:02:00.437 [264/264] Linking target lib/librte_vhost.so.24.0 00:02:00.437 INFO: autodetecting backend as ninja 00:02:00.437 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 128 00:02:01.379 CC lib/ut_mock/mock.o 00:02:01.379 CC lib/log/log.o 00:02:01.379 CC lib/log/log_flags.o 00:02:01.379 CC lib/log/log_deprecated.o 00:02:01.379 CC lib/ut/ut.o 00:02:01.639 LIB libspdk_ut_mock.a 00:02:01.639 LIB libspdk_log.a 00:02:01.639 LIB libspdk_ut.a 00:02:01.639 SO libspdk_ut_mock.so.5.0 
00:02:01.639 SO libspdk_log.so.6.1 00:02:01.639 SO libspdk_ut.so.1.0 00:02:01.639 SYMLINK libspdk_ut_mock.so 00:02:01.639 SYMLINK libspdk_log.so 00:02:01.639 SYMLINK libspdk_ut.so 00:02:01.900 CC lib/util/base64.o 00:02:01.900 CC lib/util/bit_array.o 00:02:01.900 CC lib/util/cpuset.o 00:02:01.900 CC lib/ioat/ioat.o 00:02:01.900 CC lib/util/crc16.o 00:02:01.900 CC lib/dma/dma.o 00:02:01.900 CXX lib/trace_parser/trace.o 00:02:01.900 CC lib/util/crc32.o 00:02:01.900 CC lib/util/crc32c.o 00:02:01.900 CC lib/util/crc32_ieee.o 00:02:01.900 CC lib/util/crc64.o 00:02:01.900 CC lib/util/dif.o 00:02:01.900 CC lib/util/fd.o 00:02:01.900 CC lib/util/file.o 00:02:01.900 CC lib/util/hexlify.o 00:02:01.900 CC lib/util/iov.o 00:02:01.900 CC lib/util/math.o 00:02:01.900 CC lib/util/pipe.o 00:02:01.900 CC lib/util/strerror_tls.o 00:02:01.900 CC lib/util/string.o 00:02:01.900 CC lib/util/uuid.o 00:02:01.900 CC lib/util/fd_group.o 00:02:01.900 CC lib/util/xor.o 00:02:01.900 CC lib/util/zipf.o 00:02:02.161 CC lib/vfio_user/host/vfio_user_pci.o 00:02:02.161 CC lib/vfio_user/host/vfio_user.o 00:02:02.161 LIB libspdk_dma.a 00:02:02.161 LIB libspdk_ioat.a 00:02:02.161 SO libspdk_dma.so.3.0 00:02:02.161 SO libspdk_ioat.so.6.0 00:02:02.161 SYMLINK libspdk_dma.so 00:02:02.423 SYMLINK libspdk_ioat.so 00:02:02.423 LIB libspdk_util.a 00:02:02.423 SO libspdk_util.so.8.0 00:02:02.423 LIB libspdk_vfio_user.a 00:02:02.423 SO libspdk_vfio_user.so.4.0 00:02:02.683 SYMLINK libspdk_vfio_user.so 00:02:02.683 SYMLINK libspdk_util.so 00:02:02.683 LIB libspdk_trace_parser.a 00:02:02.683 SO libspdk_trace_parser.so.4.0 00:02:02.942 CC lib/conf/conf.o 00:02:02.942 CC lib/json/json_parse.o 00:02:02.942 CC lib/vmd/vmd.o 00:02:02.942 CC lib/json/json_util.o 00:02:02.942 CC lib/vmd/led.o 00:02:02.942 CC lib/json/json_write.o 00:02:02.942 CC lib/rdma/common.o 00:02:02.942 CC lib/rdma/rdma_verbs.o 00:02:02.942 CC lib/idxd/idxd.o 00:02:02.942 CC lib/idxd/idxd_user.o 00:02:02.942 CC lib/env_dpdk/env.o 00:02:02.942 CC lib/env_dpdk/memory.o 00:02:02.942 CC lib/idxd/idxd_kernel.o 00:02:02.942 CC lib/env_dpdk/pci.o 00:02:02.942 CC lib/env_dpdk/init.o 00:02:02.942 CC lib/env_dpdk/threads.o 00:02:02.942 CC lib/env_dpdk/pci_ioat.o 00:02:02.942 CC lib/env_dpdk/pci_virtio.o 00:02:02.942 CC lib/env_dpdk/pci_vmd.o 00:02:02.942 CC lib/env_dpdk/pci_idxd.o 00:02:02.942 CC lib/env_dpdk/pci_event.o 00:02:02.942 SYMLINK libspdk_trace_parser.so 00:02:02.942 CC lib/env_dpdk/sigbus_handler.o 00:02:02.942 CC lib/env_dpdk/pci_dpdk.o 00:02:02.942 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:02.942 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:03.203 LIB libspdk_conf.a 00:02:03.203 SO libspdk_conf.so.5.0 00:02:03.203 LIB libspdk_rdma.a 00:02:03.203 LIB libspdk_json.a 00:02:03.203 SO libspdk_rdma.so.5.0 00:02:03.203 SO libspdk_json.so.5.1 00:02:03.203 SYMLINK libspdk_conf.so 00:02:03.203 SYMLINK libspdk_rdma.so 00:02:03.203 SYMLINK libspdk_json.so 00:02:03.464 LIB libspdk_idxd.a 00:02:03.464 SO libspdk_idxd.so.11.0 00:02:03.464 LIB libspdk_vmd.a 00:02:03.464 SO libspdk_vmd.so.5.0 00:02:03.464 SYMLINK libspdk_idxd.so 00:02:03.464 CC lib/jsonrpc/jsonrpc_server.o 00:02:03.464 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:03.464 CC lib/jsonrpc/jsonrpc_client.o 00:02:03.464 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:03.464 SYMLINK libspdk_vmd.so 00:02:03.725 LIB libspdk_jsonrpc.a 00:02:03.725 SO libspdk_jsonrpc.so.5.1 00:02:03.985 SYMLINK libspdk_jsonrpc.so 00:02:04.246 CC lib/rpc/rpc.o 00:02:04.246 LIB libspdk_env_dpdk.a 00:02:04.246 SO libspdk_env_dpdk.so.13.0 00:02:04.246 LIB 
libspdk_rpc.a 00:02:04.507 SO libspdk_rpc.so.5.0 00:02:04.507 SYMLINK libspdk_rpc.so 00:02:04.507 SYMLINK libspdk_env_dpdk.so 00:02:04.768 CC lib/trace/trace.o 00:02:04.768 CC lib/notify/notify.o 00:02:04.768 CC lib/trace/trace_flags.o 00:02:04.768 CC lib/sock/sock.o 00:02:04.768 CC lib/notify/notify_rpc.o 00:02:04.768 CC lib/trace/trace_rpc.o 00:02:04.768 CC lib/sock/sock_rpc.o 00:02:04.768 LIB libspdk_notify.a 00:02:04.768 LIB libspdk_trace.a 00:02:04.768 SO libspdk_notify.so.5.0 00:02:05.028 SO libspdk_trace.so.9.0 00:02:05.028 SYMLINK libspdk_trace.so 00:02:05.028 SYMLINK libspdk_notify.so 00:02:05.028 LIB libspdk_sock.a 00:02:05.028 SO libspdk_sock.so.8.0 00:02:05.028 SYMLINK libspdk_sock.so 00:02:05.288 CC lib/thread/thread.o 00:02:05.288 CC lib/thread/iobuf.o 00:02:05.288 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:05.288 CC lib/nvme/nvme_ctrlr.o 00:02:05.288 CC lib/nvme/nvme_fabric.o 00:02:05.288 CC lib/nvme/nvme_ns_cmd.o 00:02:05.288 CC lib/nvme/nvme_ns.o 00:02:05.288 CC lib/nvme/nvme_pcie_common.o 00:02:05.288 CC lib/nvme/nvme_pcie.o 00:02:05.289 CC lib/nvme/nvme_qpair.o 00:02:05.289 CC lib/nvme/nvme.o 00:02:05.289 CC lib/nvme/nvme_quirks.o 00:02:05.289 CC lib/nvme/nvme_transport.o 00:02:05.289 CC lib/nvme/nvme_discovery.o 00:02:05.289 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:05.289 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:05.289 CC lib/nvme/nvme_tcp.o 00:02:05.289 CC lib/nvme/nvme_opal.o 00:02:05.289 CC lib/nvme/nvme_io_msg.o 00:02:05.289 CC lib/nvme/nvme_poll_group.o 00:02:05.289 CC lib/nvme/nvme_zns.o 00:02:05.289 CC lib/nvme/nvme_cuse.o 00:02:05.289 CC lib/nvme/nvme_vfio_user.o 00:02:05.289 CC lib/nvme/nvme_rdma.o 00:02:06.674 LIB libspdk_thread.a 00:02:06.674 SO libspdk_thread.so.9.0 00:02:06.674 SYMLINK libspdk_thread.so 00:02:06.934 CC lib/init/subsystem.o 00:02:06.934 CC lib/init/json_config.o 00:02:06.934 CC lib/init/subsystem_rpc.o 00:02:06.934 CC lib/init/rpc.o 00:02:06.934 CC lib/accel/accel.o 00:02:06.934 CC lib/accel/accel_rpc.o 00:02:06.934 CC lib/blob/blobstore.o 00:02:06.934 CC lib/accel/accel_sw.o 00:02:06.934 CC lib/blob/request.o 00:02:06.934 CC lib/blob/zeroes.o 00:02:06.934 CC lib/blob/blob_bs_dev.o 00:02:06.934 CC lib/virtio/virtio.o 00:02:06.934 CC lib/virtio/virtio_vhost_user.o 00:02:06.934 CC lib/virtio/virtio_vfio_user.o 00:02:06.934 CC lib/virtio/virtio_pci.o 00:02:07.195 LIB libspdk_nvme.a 00:02:07.195 LIB libspdk_virtio.a 00:02:07.195 SO libspdk_nvme.so.12.0 00:02:07.195 SO libspdk_virtio.so.6.0 00:02:07.195 LIB libspdk_init.a 00:02:07.455 SYMLINK libspdk_virtio.so 00:02:07.456 SO libspdk_init.so.4.0 00:02:07.456 SYMLINK libspdk_init.so 00:02:07.456 SYMLINK libspdk_nvme.so 00:02:07.717 CC lib/event/app.o 00:02:07.717 CC lib/event/reactor.o 00:02:07.717 CC lib/event/log_rpc.o 00:02:07.717 CC lib/event/app_rpc.o 00:02:07.717 CC lib/event/scheduler_static.o 00:02:07.717 LIB libspdk_accel.a 00:02:07.717 SO libspdk_accel.so.14.0 00:02:07.717 SYMLINK libspdk_accel.so 00:02:07.978 LIB libspdk_event.a 00:02:07.978 SO libspdk_event.so.12.0 00:02:07.978 CC lib/bdev/bdev.o 00:02:07.978 CC lib/bdev/bdev_rpc.o 00:02:07.978 CC lib/bdev/bdev_zone.o 00:02:07.978 CC lib/bdev/part.o 00:02:07.978 CC lib/bdev/scsi_nvme.o 00:02:08.239 SYMLINK libspdk_event.so 00:02:09.623 LIB libspdk_blob.a 00:02:09.623 SO libspdk_blob.so.10.1 00:02:09.883 SYMLINK libspdk_blob.so 00:02:10.144 CC lib/blobfs/blobfs.o 00:02:10.144 CC lib/blobfs/tree.o 00:02:10.144 CC lib/lvol/lvol.o 00:02:10.144 LIB libspdk_bdev.a 00:02:10.144 SO libspdk_bdev.so.14.0 00:02:10.405 SYMLINK libspdk_bdev.so 00:02:10.405 CC 
lib/nbd/nbd.o 00:02:10.405 CC lib/nbd/nbd_rpc.o 00:02:10.405 CC lib/ublk/ublk.o 00:02:10.405 CC lib/ftl/ftl_core.o 00:02:10.405 CC lib/nvmf/ctrlr.o 00:02:10.405 CC lib/nvmf/ctrlr_bdev.o 00:02:10.405 CC lib/ublk/ublk_rpc.o 00:02:10.405 CC lib/scsi/dev.o 00:02:10.405 CC lib/ftl/ftl_init.o 00:02:10.405 CC lib/nvmf/ctrlr_discovery.o 00:02:10.405 CC lib/ftl/ftl_layout.o 00:02:10.405 CC lib/scsi/lun.o 00:02:10.664 CC lib/ftl/ftl_debug.o 00:02:10.664 CC lib/nvmf/subsystem.o 00:02:10.664 CC lib/nvmf/nvmf_rpc.o 00:02:10.664 CC lib/scsi/port.o 00:02:10.664 CC lib/ftl/ftl_io.o 00:02:10.664 CC lib/nvmf/nvmf.o 00:02:10.664 CC lib/scsi/scsi.o 00:02:10.664 CC lib/ftl/ftl_sb.o 00:02:10.664 CC lib/nvmf/transport.o 00:02:10.664 CC lib/scsi/scsi_bdev.o 00:02:10.664 CC lib/ftl/ftl_l2p.o 00:02:10.664 CC lib/nvmf/tcp.o 00:02:10.664 CC lib/scsi/scsi_pr.o 00:02:10.664 CC lib/ftl/ftl_l2p_flat.o 00:02:10.664 CC lib/nvmf/rdma.o 00:02:10.664 CC lib/scsi/scsi_rpc.o 00:02:10.664 CC lib/ftl/ftl_nv_cache.o 00:02:10.664 CC lib/ftl/ftl_band.o 00:02:10.664 CC lib/scsi/task.o 00:02:10.664 CC lib/ftl/ftl_band_ops.o 00:02:10.664 CC lib/ftl/ftl_writer.o 00:02:10.664 CC lib/ftl/ftl_rq.o 00:02:10.664 CC lib/ftl/ftl_reloc.o 00:02:10.664 CC lib/ftl/ftl_l2p_cache.o 00:02:10.664 CC lib/ftl/ftl_p2l.o 00:02:10.664 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:10.664 CC lib/ftl/mngt/ftl_mngt.o 00:02:10.664 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:10.664 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:10.664 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:10.664 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:10.664 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:10.664 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:10.664 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:10.664 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:10.664 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:10.664 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:10.664 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:10.664 CC lib/ftl/utils/ftl_conf.o 00:02:10.664 CC lib/ftl/utils/ftl_md.o 00:02:10.664 CC lib/ftl/utils/ftl_mempool.o 00:02:10.664 CC lib/ftl/utils/ftl_bitmap.o 00:02:10.664 CC lib/ftl/utils/ftl_property.o 00:02:10.664 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:10.664 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:10.664 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:10.664 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:10.664 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:10.664 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:10.664 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:10.664 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:10.664 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:10.664 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:10.664 CC lib/ftl/base/ftl_base_bdev.o 00:02:10.664 CC lib/ftl/base/ftl_base_dev.o 00:02:10.664 CC lib/ftl/ftl_trace.o 00:02:10.664 LIB libspdk_blobfs.a 00:02:10.664 LIB libspdk_lvol.a 00:02:10.924 SO libspdk_blobfs.so.9.0 00:02:10.924 SO libspdk_lvol.so.9.1 00:02:10.924 SYMLINK libspdk_blobfs.so 00:02:10.924 SYMLINK libspdk_lvol.so 00:02:11.185 LIB libspdk_nbd.a 00:02:11.185 SO libspdk_nbd.so.6.0 00:02:11.185 SYMLINK libspdk_nbd.so 00:02:11.185 LIB libspdk_scsi.a 00:02:11.185 SO libspdk_scsi.so.8.0 00:02:11.185 LIB libspdk_ublk.a 00:02:11.444 SO libspdk_ublk.so.2.0 00:02:11.444 SYMLINK libspdk_scsi.so 00:02:11.444 SYMLINK libspdk_ublk.so 00:02:11.704 CC lib/iscsi/conn.o 00:02:11.704 CC lib/iscsi/init_grp.o 00:02:11.704 CC lib/iscsi/iscsi.o 00:02:11.704 CC lib/iscsi/md5.o 00:02:11.704 CC lib/iscsi/param.o 00:02:11.704 CC lib/vhost/vhost.o 00:02:11.704 CC lib/iscsi/portal_grp.o 00:02:11.704 CC lib/iscsi/tgt_node.o 00:02:11.704 CC lib/vhost/vhost_rpc.o 00:02:11.704 
CC lib/iscsi/iscsi_subsystem.o 00:02:11.704 CC lib/vhost/vhost_scsi.o 00:02:11.704 CC lib/vhost/vhost_blk.o 00:02:11.704 CC lib/iscsi/iscsi_rpc.o 00:02:11.704 CC lib/vhost/rte_vhost_user.o 00:02:11.704 CC lib/iscsi/task.o 00:02:12.274 LIB libspdk_nvmf.a 00:02:12.535 SO libspdk_nvmf.so.17.0 00:02:12.535 LIB libspdk_vhost.a 00:02:12.535 SO libspdk_vhost.so.7.1 00:02:12.535 SYMLINK libspdk_nvmf.so 00:02:12.535 SYMLINK libspdk_vhost.so 00:02:12.795 LIB libspdk_ftl.a 00:02:12.795 LIB libspdk_iscsi.a 00:02:12.795 SO libspdk_ftl.so.8.0 00:02:12.795 SO libspdk_iscsi.so.7.0 00:02:13.055 SYMLINK libspdk_iscsi.so 00:02:13.055 SYMLINK libspdk_ftl.so 00:02:13.627 CC module/env_dpdk/env_dpdk_rpc.o 00:02:13.627 CC module/sock/posix/posix.o 00:02:13.627 CC module/blob/bdev/blob_bdev.o 00:02:13.627 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:13.627 CC module/accel/dsa/accel_dsa.o 00:02:13.627 CC module/accel/error/accel_error.o 00:02:13.627 CC module/accel/error/accel_error_rpc.o 00:02:13.627 CC module/accel/dsa/accel_dsa_rpc.o 00:02:13.627 CC module/accel/ioat/accel_ioat.o 00:02:13.627 CC module/accel/ioat/accel_ioat_rpc.o 00:02:13.627 CC module/accel/iaa/accel_iaa.o 00:02:13.627 CC module/scheduler/gscheduler/gscheduler.o 00:02:13.627 CC module/accel/iaa/accel_iaa_rpc.o 00:02:13.627 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:13.627 LIB libspdk_env_dpdk_rpc.a 00:02:13.627 SO libspdk_env_dpdk_rpc.so.5.0 00:02:13.888 LIB libspdk_scheduler_dpdk_governor.a 00:02:13.888 LIB libspdk_accel_error.a 00:02:13.888 LIB libspdk_scheduler_gscheduler.a 00:02:13.888 SYMLINK libspdk_env_dpdk_rpc.so 00:02:13.888 LIB libspdk_scheduler_dynamic.a 00:02:13.888 LIB libspdk_accel_iaa.a 00:02:13.888 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:13.888 SO libspdk_scheduler_gscheduler.so.3.0 00:02:13.888 SO libspdk_accel_error.so.1.0 00:02:13.888 SO libspdk_scheduler_dynamic.so.3.0 00:02:13.888 LIB libspdk_accel_dsa.a 00:02:13.888 SO libspdk_accel_iaa.so.2.0 00:02:13.888 LIB libspdk_blob_bdev.a 00:02:13.888 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:13.888 SYMLINK libspdk_scheduler_gscheduler.so 00:02:13.888 SO libspdk_accel_dsa.so.4.0 00:02:13.888 SO libspdk_blob_bdev.so.10.1 00:02:13.888 SYMLINK libspdk_scheduler_dynamic.so 00:02:13.888 SYMLINK libspdk_accel_iaa.so 00:02:13.888 LIB libspdk_accel_ioat.a 00:02:13.888 SO libspdk_accel_ioat.so.5.0 00:02:13.888 SYMLINK libspdk_accel_error.so 00:02:13.888 SYMLINK libspdk_blob_bdev.so 00:02:13.888 SYMLINK libspdk_accel_dsa.so 00:02:14.148 SYMLINK libspdk_accel_ioat.so 00:02:14.148 LIB libspdk_sock_posix.a 00:02:14.409 SO libspdk_sock_posix.so.5.0 00:02:14.409 CC module/blobfs/bdev/blobfs_bdev.o 00:02:14.409 CC module/bdev/error/vbdev_error.o 00:02:14.409 CC module/bdev/gpt/gpt.o 00:02:14.409 SYMLINK libspdk_sock_posix.so 00:02:14.409 CC module/bdev/split/vbdev_split.o 00:02:14.409 CC module/bdev/split/vbdev_split_rpc.o 00:02:14.409 CC module/bdev/gpt/vbdev_gpt.o 00:02:14.409 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:14.409 CC module/bdev/error/vbdev_error_rpc.o 00:02:14.409 CC module/bdev/lvol/vbdev_lvol.o 00:02:14.409 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:14.409 CC module/bdev/nvme/bdev_nvme.o 00:02:14.409 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:14.409 CC module/bdev/nvme/nvme_rpc.o 00:02:14.409 CC module/bdev/nvme/bdev_mdns_client.o 00:02:14.409 CC module/bdev/delay/vbdev_delay.o 00:02:14.409 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:14.409 CC module/bdev/nvme/vbdev_opal.o 00:02:14.409 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:14.409 
CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:14.409 CC module/bdev/iscsi/bdev_iscsi.o 00:02:14.409 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:14.409 CC module/bdev/malloc/bdev_malloc.o 00:02:14.409 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:14.409 CC module/bdev/null/bdev_null.o 00:02:14.409 CC module/bdev/ftl/bdev_ftl.o 00:02:14.409 CC module/bdev/raid/bdev_raid.o 00:02:14.409 CC module/bdev/aio/bdev_aio.o 00:02:14.409 CC module/bdev/raid/bdev_raid_sb.o 00:02:14.409 CC module/bdev/raid/bdev_raid_rpc.o 00:02:14.409 CC module/bdev/null/bdev_null_rpc.o 00:02:14.409 CC module/bdev/aio/bdev_aio_rpc.o 00:02:14.409 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:14.409 CC module/bdev/passthru/vbdev_passthru.o 00:02:14.409 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:14.409 CC module/bdev/raid/raid0.o 00:02:14.409 CC module/bdev/raid/raid1.o 00:02:14.409 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:14.409 CC module/bdev/raid/concat.o 00:02:14.409 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:14.410 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:14.410 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:14.410 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:14.670 LIB libspdk_bdev_split.a 00:02:14.670 LIB libspdk_bdev_error.a 00:02:14.670 SO libspdk_bdev_split.so.5.0 00:02:14.670 LIB libspdk_bdev_gpt.a 00:02:14.670 SO libspdk_bdev_error.so.5.0 00:02:14.670 LIB libspdk_bdev_null.a 00:02:14.670 SO libspdk_bdev_gpt.so.5.0 00:02:14.670 SO libspdk_bdev_null.so.5.0 00:02:14.670 SYMLINK libspdk_bdev_split.so 00:02:14.670 LIB libspdk_bdev_ftl.a 00:02:14.670 LIB libspdk_bdev_passthru.a 00:02:14.670 SYMLINK libspdk_bdev_error.so 00:02:14.931 LIB libspdk_bdev_malloc.a 00:02:14.931 SO libspdk_bdev_passthru.so.5.0 00:02:14.931 SYMLINK libspdk_bdev_gpt.so 00:02:14.931 LIB libspdk_bdev_iscsi.a 00:02:14.931 LIB libspdk_bdev_zone_block.a 00:02:14.931 SO libspdk_bdev_ftl.so.5.0 00:02:14.931 LIB libspdk_blobfs_bdev.a 00:02:14.931 LIB libspdk_bdev_aio.a 00:02:14.931 SYMLINK libspdk_bdev_null.so 00:02:14.931 LIB libspdk_bdev_delay.a 00:02:14.931 SO libspdk_bdev_zone_block.so.5.0 00:02:14.931 SO libspdk_bdev_malloc.so.5.0 00:02:14.931 SO libspdk_bdev_iscsi.so.5.0 00:02:14.931 SO libspdk_blobfs_bdev.so.5.0 00:02:14.931 SO libspdk_bdev_delay.so.5.0 00:02:14.931 LIB libspdk_bdev_lvol.a 00:02:14.931 SO libspdk_bdev_aio.so.5.0 00:02:14.931 SYMLINK libspdk_bdev_passthru.so 00:02:14.931 SYMLINK libspdk_bdev_ftl.so 00:02:14.931 SYMLINK libspdk_bdev_zone_block.so 00:02:14.931 SO libspdk_bdev_lvol.so.5.0 00:02:14.931 SYMLINK libspdk_bdev_malloc.so 00:02:14.931 SYMLINK libspdk_bdev_iscsi.so 00:02:14.931 LIB libspdk_bdev_virtio.a 00:02:14.931 SYMLINK libspdk_blobfs_bdev.so 00:02:14.931 SYMLINK libspdk_bdev_aio.so 00:02:14.931 SYMLINK libspdk_bdev_delay.so 00:02:14.931 SO libspdk_bdev_virtio.so.5.0 00:02:14.931 SYMLINK libspdk_bdev_lvol.so 00:02:15.206 SYMLINK libspdk_bdev_virtio.so 00:02:15.206 LIB libspdk_bdev_raid.a 00:02:15.206 SO libspdk_bdev_raid.so.5.0 00:02:15.506 SYMLINK libspdk_bdev_raid.so 00:02:16.079 LIB libspdk_bdev_nvme.a 00:02:16.340 SO libspdk_bdev_nvme.so.6.0 00:02:16.340 SYMLINK libspdk_bdev_nvme.so 00:02:16.912 CC module/event/subsystems/iobuf/iobuf.o 00:02:16.912 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:16.912 CC module/event/subsystems/vmd/vmd.o 00:02:16.912 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:16.912 CC module/event/subsystems/sock/sock.o 00:02:16.912 CC module/event/subsystems/scheduler/scheduler.o 00:02:16.912 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:16.912 LIB 
libspdk_event_sock.a 00:02:16.912 LIB libspdk_event_vmd.a 00:02:16.912 LIB libspdk_event_vhost_blk.a 00:02:16.912 LIB libspdk_event_scheduler.a 00:02:16.912 SO libspdk_event_sock.so.4.0 00:02:16.912 SO libspdk_event_vmd.so.5.0 00:02:16.912 SO libspdk_event_vhost_blk.so.2.0 00:02:16.912 SO libspdk_event_scheduler.so.3.0 00:02:17.172 SYMLINK libspdk_event_sock.so 00:02:17.172 SYMLINK libspdk_event_vmd.so 00:02:17.172 SYMLINK libspdk_event_vhost_blk.so 00:02:17.172 SYMLINK libspdk_event_scheduler.so 00:02:17.172 LIB libspdk_event_iobuf.a 00:02:17.172 SO libspdk_event_iobuf.so.2.0 00:02:17.432 SYMLINK libspdk_event_iobuf.so 00:02:17.432 CC module/event/subsystems/accel/accel.o 00:02:17.694 LIB libspdk_event_accel.a 00:02:17.694 SO libspdk_event_accel.so.5.0 00:02:17.955 SYMLINK libspdk_event_accel.so 00:02:17.955 CC module/event/subsystems/bdev/bdev.o 00:02:18.215 LIB libspdk_event_bdev.a 00:02:18.215 SO libspdk_event_bdev.so.5.0 00:02:18.215 SYMLINK libspdk_event_bdev.so 00:02:18.477 CC module/event/subsystems/ublk/ublk.o 00:02:18.477 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:18.477 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:18.477 CC module/event/subsystems/scsi/scsi.o 00:02:18.477 CC module/event/subsystems/nbd/nbd.o 00:02:18.737 LIB libspdk_event_scsi.a 00:02:18.737 LIB libspdk_event_ublk.a 00:02:18.737 LIB libspdk_event_nbd.a 00:02:18.737 SO libspdk_event_ublk.so.2.0 00:02:18.737 SO libspdk_event_scsi.so.5.0 00:02:18.737 SO libspdk_event_nbd.so.5.0 00:02:18.737 SYMLINK libspdk_event_scsi.so 00:02:18.737 SYMLINK libspdk_event_ublk.so 00:02:18.737 SYMLINK libspdk_event_nbd.so 00:02:18.999 LIB libspdk_event_nvmf.a 00:02:18.999 CC module/event/subsystems/iscsi/iscsi.o 00:02:18.999 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:18.999 SO libspdk_event_nvmf.so.5.0 00:02:19.260 SYMLINK libspdk_event_nvmf.so 00:02:19.260 LIB libspdk_event_vhost_scsi.a 00:02:19.260 LIB libspdk_event_iscsi.a 00:02:19.260 SO libspdk_event_vhost_scsi.so.2.0 00:02:19.260 SO libspdk_event_iscsi.so.5.0 00:02:19.260 SYMLINK libspdk_event_iscsi.so 00:02:19.521 SYMLINK libspdk_event_vhost_scsi.so 00:02:19.521 SO libspdk.so.5.0 00:02:19.521 SYMLINK libspdk.so 00:02:19.780 TEST_HEADER include/spdk/accel_module.h 00:02:19.780 TEST_HEADER include/spdk/accel.h 00:02:19.780 TEST_HEADER include/spdk/assert.h 00:02:19.780 TEST_HEADER include/spdk/barrier.h 00:02:19.780 TEST_HEADER include/spdk/base64.h 00:02:19.780 TEST_HEADER include/spdk/bdev.h 00:02:19.780 TEST_HEADER include/spdk/bdev_module.h 00:02:19.780 TEST_HEADER include/spdk/bdev_zone.h 00:02:19.780 TEST_HEADER include/spdk/bit_array.h 00:02:19.780 TEST_HEADER include/spdk/bit_pool.h 00:02:19.780 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:19.780 TEST_HEADER include/spdk/blob_bdev.h 00:02:19.780 TEST_HEADER include/spdk/blobfs.h 00:02:19.780 TEST_HEADER include/spdk/blob.h 00:02:19.780 CC test/rpc_client/rpc_client_test.o 00:02:19.780 CXX app/trace/trace.o 00:02:19.781 TEST_HEADER include/spdk/conf.h 00:02:19.781 TEST_HEADER include/spdk/config.h 00:02:19.781 TEST_HEADER include/spdk/cpuset.h 00:02:19.781 TEST_HEADER include/spdk/crc16.h 00:02:19.781 CC app/spdk_lspci/spdk_lspci.o 00:02:19.781 CC app/trace_record/trace_record.o 00:02:19.781 TEST_HEADER include/spdk/crc32.h 00:02:19.781 TEST_HEADER include/spdk/crc64.h 00:02:19.781 CC app/spdk_nvme_discover/discovery_aer.o 00:02:19.781 TEST_HEADER include/spdk/dif.h 00:02:19.781 TEST_HEADER include/spdk/dma.h 00:02:19.781 TEST_HEADER include/spdk/env_dpdk.h 00:02:19.781 CC app/spdk_top/spdk_top.o 
00:02:19.781 TEST_HEADER include/spdk/endian.h 00:02:19.781 CC app/spdk_nvme_perf/perf.o 00:02:19.781 TEST_HEADER include/spdk/fd_group.h 00:02:19.781 TEST_HEADER include/spdk/env.h 00:02:19.781 TEST_HEADER include/spdk/event.h 00:02:19.781 TEST_HEADER include/spdk/file.h 00:02:19.781 TEST_HEADER include/spdk/ftl.h 00:02:19.781 TEST_HEADER include/spdk/fd.h 00:02:19.781 TEST_HEADER include/spdk/hexlify.h 00:02:19.781 TEST_HEADER include/spdk/gpt_spec.h 00:02:19.781 TEST_HEADER include/spdk/histogram_data.h 00:02:19.781 TEST_HEADER include/spdk/idxd.h 00:02:19.781 CC app/spdk_nvme_identify/identify.o 00:02:19.781 TEST_HEADER include/spdk/ioat.h 00:02:19.781 TEST_HEADER include/spdk/idxd_spec.h 00:02:19.781 TEST_HEADER include/spdk/init.h 00:02:19.781 TEST_HEADER include/spdk/ioat_spec.h 00:02:19.781 TEST_HEADER include/spdk/json.h 00:02:19.781 TEST_HEADER include/spdk/iscsi_spec.h 00:02:19.781 CC app/iscsi_tgt/iscsi_tgt.o 00:02:19.781 TEST_HEADER include/spdk/likely.h 00:02:19.781 TEST_HEADER include/spdk/jsonrpc.h 00:02:19.781 TEST_HEADER include/spdk/log.h 00:02:19.781 TEST_HEADER include/spdk/lvol.h 00:02:19.781 TEST_HEADER include/spdk/mmio.h 00:02:19.781 TEST_HEADER include/spdk/memory.h 00:02:19.781 TEST_HEADER include/spdk/notify.h 00:02:19.781 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:19.781 TEST_HEADER include/spdk/nvme_intel.h 00:02:19.781 TEST_HEADER include/spdk/nbd.h 00:02:19.781 TEST_HEADER include/spdk/nvme.h 00:02:19.781 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:20.053 TEST_HEADER include/spdk/nvme_spec.h 00:02:20.053 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:20.053 TEST_HEADER include/spdk/nvme_zns.h 00:02:20.053 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:20.053 TEST_HEADER include/spdk/nvmf.h 00:02:20.053 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:20.053 CC app/spdk_dd/spdk_dd.o 00:02:20.053 TEST_HEADER include/spdk/nvmf_spec.h 00:02:20.053 TEST_HEADER include/spdk/nvmf_transport.h 00:02:20.053 CC app/spdk_tgt/spdk_tgt.o 00:02:20.053 CC app/vhost/vhost.o 00:02:20.053 TEST_HEADER include/spdk/opal.h 00:02:20.053 TEST_HEADER include/spdk/opal_spec.h 00:02:20.053 TEST_HEADER include/spdk/pci_ids.h 00:02:20.053 TEST_HEADER include/spdk/pipe.h 00:02:20.053 TEST_HEADER include/spdk/queue.h 00:02:20.053 TEST_HEADER include/spdk/reduce.h 00:02:20.053 TEST_HEADER include/spdk/scheduler.h 00:02:20.053 TEST_HEADER include/spdk/scsi.h 00:02:20.053 TEST_HEADER include/spdk/rpc.h 00:02:20.053 TEST_HEADER include/spdk/stdinc.h 00:02:20.053 TEST_HEADER include/spdk/scsi_spec.h 00:02:20.053 TEST_HEADER include/spdk/string.h 00:02:20.053 TEST_HEADER include/spdk/sock.h 00:02:20.053 TEST_HEADER include/spdk/thread.h 00:02:20.053 TEST_HEADER include/spdk/trace.h 00:02:20.053 CC app/nvmf_tgt/nvmf_main.o 00:02:20.053 TEST_HEADER include/spdk/trace_parser.h 00:02:20.053 TEST_HEADER include/spdk/ublk.h 00:02:20.053 TEST_HEADER include/spdk/tree.h 00:02:20.053 TEST_HEADER include/spdk/util.h 00:02:20.053 TEST_HEADER include/spdk/uuid.h 00:02:20.053 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:20.053 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:20.053 TEST_HEADER include/spdk/version.h 00:02:20.053 TEST_HEADER include/spdk/vmd.h 00:02:20.053 TEST_HEADER include/spdk/vhost.h 00:02:20.053 TEST_HEADER include/spdk/zipf.h 00:02:20.053 TEST_HEADER include/spdk/xor.h 00:02:20.053 CXX test/cpp_headers/accel.o 00:02:20.053 CXX test/cpp_headers/accel_module.o 00:02:20.053 CXX test/cpp_headers/assert.o 00:02:20.053 CXX test/cpp_headers/barrier.o 00:02:20.053 CXX test/cpp_headers/bdev.o 
00:02:20.053 CXX test/cpp_headers/base64.o 00:02:20.053 CXX test/cpp_headers/bdev_module.o 00:02:20.053 CXX test/cpp_headers/bit_array.o 00:02:20.053 CXX test/cpp_headers/bdev_zone.o 00:02:20.053 CXX test/cpp_headers/bit_pool.o 00:02:20.053 CXX test/cpp_headers/blob_bdev.o 00:02:20.053 CXX test/cpp_headers/blobfs_bdev.o 00:02:20.053 CXX test/cpp_headers/blobfs.o 00:02:20.053 CXX test/cpp_headers/blob.o 00:02:20.053 CXX test/cpp_headers/config.o 00:02:20.053 CXX test/cpp_headers/conf.o 00:02:20.053 CXX test/cpp_headers/cpuset.o 00:02:20.053 CXX test/cpp_headers/crc16.o 00:02:20.053 CXX test/cpp_headers/crc32.o 00:02:20.053 CXX test/cpp_headers/crc64.o 00:02:20.053 CXX test/cpp_headers/dma.o 00:02:20.053 CXX test/cpp_headers/endian.o 00:02:20.053 CXX test/cpp_headers/dif.o 00:02:20.053 CXX test/cpp_headers/env_dpdk.o 00:02:20.053 CXX test/cpp_headers/event.o 00:02:20.053 CXX test/cpp_headers/fd_group.o 00:02:20.053 CXX test/cpp_headers/env.o 00:02:20.053 CXX test/cpp_headers/file.o 00:02:20.053 CXX test/cpp_headers/hexlify.o 00:02:20.053 CXX test/cpp_headers/gpt_spec.o 00:02:20.053 CXX test/cpp_headers/fd.o 00:02:20.053 CXX test/cpp_headers/ftl.o 00:02:20.053 CXX test/cpp_headers/histogram_data.o 00:02:20.053 CXX test/cpp_headers/idxd_spec.o 00:02:20.053 CXX test/cpp_headers/idxd.o 00:02:20.053 CXX test/cpp_headers/init.o 00:02:20.053 CXX test/cpp_headers/ioat.o 00:02:20.053 CXX test/cpp_headers/ioat_spec.o 00:02:20.053 CXX test/cpp_headers/json.o 00:02:20.053 CXX test/cpp_headers/jsonrpc.o 00:02:20.053 CXX test/cpp_headers/likely.o 00:02:20.053 CXX test/cpp_headers/iscsi_spec.o 00:02:20.053 CXX test/cpp_headers/log.o 00:02:20.053 CXX test/cpp_headers/memory.o 00:02:20.053 CXX test/cpp_headers/lvol.o 00:02:20.053 CC test/app/stub/stub.o 00:02:20.053 CC test/app/jsoncat/jsoncat.o 00:02:20.053 CC test/app/histogram_perf/histogram_perf.o 00:02:20.053 CXX test/cpp_headers/mmio.o 00:02:20.053 CC examples/util/zipf/zipf.o 00:02:20.053 CC test/nvme/aer/aer.o 00:02:20.053 CXX test/cpp_headers/nbd.o 00:02:20.053 CC test/nvme/reset/reset.o 00:02:20.053 CC test/nvme/boot_partition/boot_partition.o 00:02:20.053 CXX test/cpp_headers/notify.o 00:02:20.053 CC test/env/memory/memory_ut.o 00:02:20.053 CXX test/cpp_headers/nvme.o 00:02:20.053 CXX test/cpp_headers/nvme_ocssd.o 00:02:20.053 CC test/env/pci/pci_ut.o 00:02:20.053 CXX test/cpp_headers/nvme_intel.o 00:02:20.053 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:20.053 CC app/fio/nvme/fio_plugin.o 00:02:20.053 CC test/nvme/sgl/sgl.o 00:02:20.053 CXX test/cpp_headers/nvme_spec.o 00:02:20.053 CC test/nvme/simple_copy/simple_copy.o 00:02:20.053 CC test/nvme/e2edp/nvme_dp.o 00:02:20.053 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:20.053 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:20.053 CC test/thread/poller_perf/poller_perf.o 00:02:20.053 CC examples/nvme/hello_world/hello_world.o 00:02:20.053 CC examples/nvme/reconnect/reconnect.o 00:02:20.053 CC test/nvme/startup/startup.o 00:02:20.053 CC test/nvme/err_injection/err_injection.o 00:02:20.053 CC test/nvme/overhead/overhead.o 00:02:20.053 CC test/event/event_perf/event_perf.o 00:02:20.053 CC examples/vmd/lsvmd/lsvmd.o 00:02:20.053 CC examples/accel/perf/accel_perf.o 00:02:20.053 CC examples/ioat/perf/perf.o 00:02:20.053 CC test/nvme/reserve/reserve.o 00:02:20.053 CXX test/cpp_headers/nvme_zns.o 00:02:20.053 CC test/event/reactor_perf/reactor_perf.o 00:02:20.053 CC examples/idxd/perf/perf.o 00:02:20.053 CC test/nvme/fused_ordering/fused_ordering.o 00:02:20.053 CC test/nvme/connect_stress/connect_stress.o 
00:02:20.053 CC examples/nvme/arbitration/arbitration.o 00:02:20.053 CC test/event/app_repeat/app_repeat.o 00:02:20.053 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:20.053 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:20.053 CC examples/ioat/verify/verify.o 00:02:20.053 CC examples/sock/hello_world/hello_sock.o 00:02:20.328 CC test/event/reactor/reactor.o 00:02:20.328 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:20.328 CC test/nvme/cuse/cuse.o 00:02:20.328 CC test/env/vtophys/vtophys.o 00:02:20.328 CC examples/vmd/led/led.o 00:02:20.328 CC examples/nvme/hotplug/hotplug.o 00:02:20.328 CC test/app/bdev_svc/bdev_svc.o 00:02:20.328 CC test/nvme/fdp/fdp.o 00:02:20.328 CC test/nvme/compliance/nvme_compliance.o 00:02:20.328 CC app/fio/bdev/fio_plugin.o 00:02:20.328 CC examples/nvme/abort/abort.o 00:02:20.328 CC test/dma/test_dma/test_dma.o 00:02:20.328 CXX test/cpp_headers/nvmf_cmd.o 00:02:20.328 CC test/blobfs/mkfs/mkfs.o 00:02:20.328 CC examples/blob/cli/blobcli.o 00:02:20.328 CC test/accel/dif/dif.o 00:02:20.328 CC examples/bdev/hello_world/hello_bdev.o 00:02:20.328 CC test/bdev/bdevio/bdevio.o 00:02:20.328 CC examples/nvmf/nvmf/nvmf.o 00:02:20.328 CC examples/blob/hello_world/hello_blob.o 00:02:20.328 CC examples/thread/thread/thread_ex.o 00:02:20.328 CC examples/bdev/bdevperf/bdevperf.o 00:02:20.328 CC test/event/scheduler/scheduler.o 00:02:20.595 LINK spdk_lspci 00:02:20.595 LINK rpc_client_test 00:02:20.595 CC test/env/mem_callbacks/mem_callbacks.o 00:02:20.595 CC test/lvol/esnap/esnap.o 00:02:20.595 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:20.867 LINK interrupt_tgt 00:02:20.867 LINK spdk_nvme_discover 00:02:20.867 LINK histogram_perf 00:02:20.867 LINK lsvmd 00:02:20.867 LINK iscsi_tgt 00:02:20.867 LINK event_perf 00:02:20.867 LINK spdk_tgt 00:02:20.867 LINK nvmf_tgt 00:02:20.867 LINK vhost 00:02:21.138 LINK poller_perf 00:02:21.138 LINK jsoncat 00:02:21.138 LINK app_repeat 00:02:21.138 LINK zipf 00:02:21.138 LINK spdk_trace_record 00:02:21.138 LINK stub 00:02:21.138 LINK led 00:02:21.138 LINK vtophys 00:02:21.138 LINK err_injection 00:02:21.138 LINK startup 00:02:21.138 LINK reactor_perf 00:02:21.138 LINK boot_partition 00:02:21.138 LINK fused_ordering 00:02:21.138 LINK reactor 00:02:21.138 LINK ioat_perf 00:02:21.138 LINK hello_world 00:02:21.138 LINK reset 00:02:21.138 LINK mkfs 00:02:21.138 CXX test/cpp_headers/nvmf.o 00:02:21.138 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:21.402 LINK pmr_persistence 00:02:21.402 LINK bdev_svc 00:02:21.402 CXX test/cpp_headers/nvmf_spec.o 00:02:21.402 LINK aer 00:02:21.402 CXX test/cpp_headers/nvmf_transport.o 00:02:21.402 LINK overhead 00:02:21.402 CXX test/cpp_headers/opal_spec.o 00:02:21.402 CXX test/cpp_headers/pci_ids.o 00:02:21.402 CXX test/cpp_headers/opal.o 00:02:21.402 CXX test/cpp_headers/pipe.o 00:02:21.402 CXX test/cpp_headers/queue.o 00:02:21.402 CXX test/cpp_headers/reduce.o 00:02:21.402 CXX test/cpp_headers/rpc.o 00:02:21.402 CXX test/cpp_headers/scheduler.o 00:02:21.402 CXX test/cpp_headers/scsi_spec.o 00:02:21.402 CXX test/cpp_headers/scsi.o 00:02:21.402 LINK nvme_dp 00:02:21.402 LINK cmb_copy 00:02:21.402 CXX test/cpp_headers/sock.o 00:02:21.402 LINK env_dpdk_post_init 00:02:21.402 LINK hello_sock 00:02:21.402 LINK doorbell_aers 00:02:21.402 CXX test/cpp_headers/stdinc.o 00:02:21.402 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:21.402 LINK spdk_dd 00:02:21.402 CXX test/cpp_headers/string.o 00:02:21.402 CXX test/cpp_headers/thread.o 00:02:21.402 CXX test/cpp_headers/trace.o 00:02:21.402 CXX 
test/cpp_headers/trace_parser.o 00:02:21.402 CXX test/cpp_headers/tree.o 00:02:21.402 LINK hotplug 00:02:21.402 CXX test/cpp_headers/ublk.o 00:02:21.402 LINK scheduler 00:02:21.402 CXX test/cpp_headers/util.o 00:02:21.402 CXX test/cpp_headers/uuid.o 00:02:21.402 CXX test/cpp_headers/version.o 00:02:21.402 LINK reserve 00:02:21.402 LINK spdk_trace 00:02:21.402 LINK verify 00:02:21.402 CXX test/cpp_headers/vfio_user_pci.o 00:02:21.402 LINK simple_copy 00:02:21.402 CXX test/cpp_headers/vfio_user_spec.o 00:02:21.402 CXX test/cpp_headers/vhost.o 00:02:21.402 CXX test/cpp_headers/vmd.o 00:02:21.402 CXX test/cpp_headers/xor.o 00:02:21.402 LINK hello_bdev 00:02:21.662 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:21.662 LINK sgl 00:02:21.662 LINK thread 00:02:21.662 CXX test/cpp_headers/zipf.o 00:02:21.662 LINK arbitration 00:02:21.662 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:21.662 LINK nvme_compliance 00:02:21.662 LINK connect_stress 00:02:21.662 LINK abort 00:02:21.662 LINK reconnect 00:02:21.662 LINK test_dma 00:02:21.662 LINK hello_blob 00:02:21.662 LINK pci_ut 00:02:21.662 LINK accel_perf 00:02:21.662 LINK idxd_perf 00:02:21.662 LINK bdevio 00:02:21.923 LINK spdk_bdev 00:02:21.923 LINK nvme_fuzz 00:02:21.923 LINK spdk_nvme 00:02:21.923 LINK mem_callbacks 00:02:21.923 LINK fdp 00:02:21.923 LINK nvmf 00:02:21.923 LINK spdk_nvme_perf 00:02:21.923 LINK nvme_manage 00:02:21.923 LINK blobcli 00:02:21.923 LINK spdk_nvme_identify 00:02:22.183 LINK vhost_fuzz 00:02:22.183 LINK cuse 00:02:22.183 LINK spdk_top 00:02:22.183 LINK dif 00:02:22.183 LINK memory_ut 00:02:22.444 LINK bdevperf 00:02:23.015 LINK iscsi_fuzz 00:02:24.927 LINK esnap 00:02:25.498 00:02:25.498 real 0m48.791s 00:02:25.498 user 6m48.815s 00:02:25.498 sys 5m42.748s 00:02:25.498 17:39:29 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:25.498 17:39:29 -- common/autotest_common.sh@10 -- $ set +x 00:02:25.498 ************************************ 00:02:25.498 END TEST make 00:02:25.498 ************************************ 00:02:25.498 17:39:29 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:25.498 17:39:29 -- nvmf/common.sh@7 -- # uname -s 00:02:25.498 17:39:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:25.498 17:39:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:25.498 17:39:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:25.498 17:39:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:25.498 17:39:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:25.498 17:39:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:25.498 17:39:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:25.498 17:39:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:25.498 17:39:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:25.498 17:39:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:25.498 17:39:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:02:25.498 17:39:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:02:25.498 17:39:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:25.498 17:39:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:25.498 17:39:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:25.498 17:39:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:25.498 17:39:29 -- 
scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:25.498 17:39:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:25.498 17:39:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:25.498 17:39:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.498 17:39:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.498 17:39:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.498 17:39:29 -- paths/export.sh@5 -- # export PATH 00:02:25.498 17:39:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.498 17:39:29 -- nvmf/common.sh@46 -- # : 0 00:02:25.498 17:39:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:25.498 17:39:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:25.498 17:39:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:25.498 17:39:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:25.498 17:39:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:25.498 17:39:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:25.498 17:39:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:25.498 17:39:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:25.498 17:39:29 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:25.498 17:39:29 -- spdk/autotest.sh@32 -- # uname -s 00:02:25.498 17:39:29 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:25.498 17:39:29 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:25.498 17:39:29 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:25.498 17:39:29 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:25.498 17:39:29 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:25.498 17:39:29 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:25.498 17:39:29 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:25.498 17:39:29 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:25.498 17:39:29 -- spdk/autotest.sh@48 -- # udevadm_pid=1420345 00:02:25.498 17:39:29 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:25.498 17:39:29 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:25.498 17:39:29 -- spdk/autotest.sh@54 -- # echo 1420347 00:02:25.498 17:39:29 -- spdk/autotest.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:25.498 17:39:29 -- spdk/autotest.sh@56 -- # echo 1420348 00:02:25.498 17:39:29 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:02:25.498 17:39:29 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:25.498 17:39:29 -- spdk/autotest.sh@60 -- # echo 1420349 00:02:25.498 17:39:29 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:25.498 17:39:29 -- spdk/autotest.sh@62 -- # echo 1420350 00:02:25.498 17:39:29 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:25.498 17:39:29 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:25.498 17:39:29 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:25.498 17:39:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:25.498 17:39:29 -- common/autotest_common.sh@10 -- # set +x 00:02:25.498 17:39:29 -- spdk/autotest.sh@70 -- # create_test_list 00:02:25.498 17:39:29 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:25.498 17:39:29 -- common/autotest_common.sh@10 -- # set +x 00:02:25.498 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:25.498 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:25.498 17:39:29 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:25.498 17:39:29 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:25.498 17:39:29 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:25.498 17:39:29 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:25.498 17:39:29 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:25.498 17:39:29 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:25.498 17:39:29 -- common/autotest_common.sh@1440 -- # uname 00:02:25.498 17:39:29 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:02:25.498 17:39:29 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:25.498 17:39:29 -- common/autotest_common.sh@1460 -- # uname 00:02:25.498 17:39:29 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:02:25.498 17:39:29 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:25.498 17:39:29 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:02:25.498 17:39:29 -- spdk/autotest.sh@83 -- # hash lcov 00:02:25.498 17:39:29 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:25.498 17:39:29 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:25.498 --rc lcov_branch_coverage=1 00:02:25.498 --rc lcov_function_coverage=1 00:02:25.498 --rc genhtml_branch_coverage=1 00:02:25.498 --rc genhtml_function_coverage=1 00:02:25.499 --rc genhtml_legend=1 00:02:25.499 --rc geninfo_all_blocks=1 00:02:25.499 ' 00:02:25.499 17:39:29 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:02:25.499 --rc lcov_branch_coverage=1 00:02:25.499 
--rc lcov_function_coverage=1 00:02:25.499 --rc genhtml_branch_coverage=1 00:02:25.499 --rc genhtml_function_coverage=1 00:02:25.499 --rc genhtml_legend=1 00:02:25.499 --rc geninfo_all_blocks=1 00:02:25.499 ' 00:02:25.499 17:39:29 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:25.499 --rc lcov_branch_coverage=1 00:02:25.499 --rc lcov_function_coverage=1 00:02:25.499 --rc genhtml_branch_coverage=1 00:02:25.499 --rc genhtml_function_coverage=1 00:02:25.499 --rc genhtml_legend=1 00:02:25.499 --rc geninfo_all_blocks=1 00:02:25.499 --no-external' 00:02:25.499 17:39:29 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:25.499 --rc lcov_branch_coverage=1 00:02:25.499 --rc lcov_function_coverage=1 00:02:25.499 --rc genhtml_branch_coverage=1 00:02:25.499 --rc genhtml_function_coverage=1 00:02:25.499 --rc genhtml_legend=1 00:02:25.499 --rc geninfo_all_blocks=1 00:02:25.499 --no-external' 00:02:25.499 17:39:29 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:25.759 lcov: LCOV version 1.14 00:02:25.759 17:39:29 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:28.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:28.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:28.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:28.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:28.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:28.301 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no 
functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions 
found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:50.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:50.271 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:50.272 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any 
data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:50.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:50.272 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:51.658 17:39:55 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:02:51.658 17:39:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:51.658 17:39:55 -- common/autotest_common.sh@10 -- # set +x 00:02:51.658 17:39:55 -- spdk/autotest.sh@102 -- # rm -f 00:02:51.658 17:39:55 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 
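A note on the long run of geninfo warnings above: the .gcno files under test/cpp_headers appear to come from compiling each public SPDK header as a standalone translation unit (a self-containment check), so they contain no executable functions and GCOV has nothing to record; the warnings are expected and do not affect the rest of the coverage report. The exact coverage command is not visible in this log; assuming a conventional lcov capture over the build tree, the step looks roughly like this (illustrative sketch only, not the literal autotest invocation):

  # Hedged sketch of a typical lcov capture; header-only objects such as
  # test/cpp_headers/*.gcno legitimately yield "no functions found" and are
  # skipped without breaking the report.
  lcov --capture --directory ./build --output-file coverage.info
  genhtml coverage.info --output-directory coverage_html

The scripts/setup.sh reset that was just issued then prints, below, which controllers are already bound to kernel drivers before the functional tests begin.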
00:02:55.866 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:02:55.866 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:02:55.866 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:02:55.866 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:02:55.866 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:02:55.866 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:02:55.866 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:02:55.866 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:02:55.866 0000:65:00.0 (8086 0a54): Already using the nvme driver 00:02:55.866 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:02:55.866 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:02:55.866 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:02:55.866 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:02:55.866 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:02:55.866 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:02:55.866 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:02:55.866 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:02:55.866 17:39:59 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:02:55.866 17:39:59 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:02:55.866 17:39:59 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:02:55.866 17:39:59 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:02:55.866 17:39:59 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:55.866 17:39:59 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:02:55.866 17:39:59 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:02:55.866 17:39:59 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:55.866 17:39:59 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:55.866 17:39:59 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:02:55.866 17:39:59 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:02:55.866 17:39:59 -- spdk/autotest.sh@121 -- # grep -v p 00:02:55.866 17:39:59 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:55.866 17:39:59 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:02:55.866 17:39:59 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:02:55.866 17:39:59 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:02:55.866 17:39:59 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:55.866 No valid GPT data, bailing 00:02:55.866 17:40:00 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:55.866 17:40:00 -- scripts/common.sh@393 -- # pt= 00:02:55.866 17:40:00 -- scripts/common.sh@394 -- # return 1 00:02:55.866 17:40:00 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:55.866 1+0 records in 00:02:55.866 1+0 records out 00:02:55.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0024399 s, 430 MB/s 00:02:55.867 17:40:00 -- spdk/autotest.sh@129 -- # sync 00:02:55.867 17:40:00 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:55.867 17:40:00 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:55.867 17:40:00 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:04.007 17:40:07 -- spdk/autotest.sh@135 -- # uname -s 00:03:04.007 17:40:07 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:03:04.007 
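Before the setup tests start, the pre-cleanup step traced above walks every NVMe block device: get_zoned_devs skips zoned namespaces, the GPT/partition-table probe (spdk-gpt.py, then blkid) decides whether the disk is in use, and any unclaimed, non-zoned namespace gets its first MiB zeroed with dd before a final sync. A minimal sketch of that logic, assuming the same sysfs and blkid checks the trace shows (the real code lives in spdk/autotest.sh and scripts/common.sh and is more thorough):

  # Sketch only: mirrors the traced checks, not the actual script verbatim.
  for sysdev in /sys/block/nvme*; do
      dev=/dev/${sysdev##*/}
      # Zoned namespaces are skipped entirely.
      if [[ -e $sysdev/queue/zoned && $(<"$sysdev/queue/zoned") != none ]]; then
          continue
      fi
      # "No valid GPT data, bailing" plus an empty PTTYPE means the disk is not
      # in use, so the first 1 MiB is cleared to give later tests a clean device.
      if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
          dd if=/dev/zero of="$dev" bs=1M count=1
      fi
  done
  sync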
17:40:07 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:04.007 17:40:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:04.007 17:40:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:04.007 17:40:07 -- common/autotest_common.sh@10 -- # set +x 00:03:04.007 ************************************ 00:03:04.007 START TEST setup.sh 00:03:04.007 ************************************ 00:03:04.007 17:40:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:04.007 * Looking for test storage... 00:03:04.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:04.007 17:40:07 -- setup/test-setup.sh@10 -- # uname -s 00:03:04.007 17:40:07 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:04.007 17:40:07 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:04.007 17:40:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:04.007 17:40:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:04.007 17:40:07 -- common/autotest_common.sh@10 -- # set +x 00:03:04.007 ************************************ 00:03:04.007 START TEST acl 00:03:04.007 ************************************ 00:03:04.007 17:40:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:04.007 * Looking for test storage... 00:03:04.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:04.007 17:40:07 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:04.007 17:40:07 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:04.007 17:40:07 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:04.007 17:40:07 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:04.007 17:40:07 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:04.007 17:40:07 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:04.007 17:40:07 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:04.007 17:40:07 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:04.007 17:40:07 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:04.007 17:40:07 -- setup/acl.sh@12 -- # devs=() 00:03:04.007 17:40:07 -- setup/acl.sh@12 -- # declare -a devs 00:03:04.007 17:40:07 -- setup/acl.sh@13 -- # drivers=() 00:03:04.007 17:40:07 -- setup/acl.sh@13 -- # declare -A drivers 00:03:04.007 17:40:07 -- setup/acl.sh@51 -- # setup reset 00:03:04.007 17:40:07 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:04.007 17:40:07 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:08.274 17:40:11 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:08.274 17:40:11 -- setup/acl.sh@16 -- # local dev driver 00:03:08.274 17:40:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:08.274 17:40:11 -- setup/acl.sh@15 -- # setup output status 00:03:08.274 17:40:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:08.274 17:40:11 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:11.577 Hugepages 00:03:11.577 node hugesize free / total 00:03:11.577 17:40:15 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:11.577 17:40:15 -- setup/acl.sh@19 -- # continue 00:03:11.577 17:40:15 -- 
setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.577 17:40:15 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:11.577 17:40:15 -- setup/acl.sh@19 -- # continue 00:03:11.577 17:40:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.577 17:40:15 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:11.577 17:40:15 -- setup/acl.sh@19 -- # continue 00:03:11.577 17:40:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.577 00:03:11.577 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:11.577 17:40:15 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:11.577 17:40:15 -- setup/acl.sh@19 -- # continue 00:03:11.577 17:40:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.577 17:40:15 -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # continue 00:03:11.577 17:40:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.577 17:40:15 -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # continue 00:03:11.577 17:40:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.577 17:40:15 -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # continue 00:03:11.577 17:40:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.577 17:40:15 -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # continue 00:03:11.577 17:40:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.577 17:40:15 -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # continue 00:03:11.577 17:40:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.577 17:40:15 -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # continue 00:03:11.577 17:40:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.577 17:40:15 -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # continue 00:03:11.577 17:40:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.577 17:40:15 -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # continue 00:03:11.577 17:40:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.577 17:40:15 -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:11.577 17:40:15 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:11.577 17:40:15 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:11.577 17:40:15 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:11.577 17:40:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.577 17:40:15 -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # continue 
00:03:11.577 17:40:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.577 17:40:15 -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # continue 00:03:11.577 17:40:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.577 17:40:15 -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # continue 00:03:11.577 17:40:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.577 17:40:15 -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # continue 00:03:11.577 17:40:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.577 17:40:15 -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # continue 00:03:11.577 17:40:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.577 17:40:15 -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # continue 00:03:11.577 17:40:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.577 17:40:15 -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # continue 00:03:11.577 17:40:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.577 17:40:15 -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.577 17:40:15 -- setup/acl.sh@20 -- # continue 00:03:11.577 17:40:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.577 17:40:15 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:11.577 17:40:15 -- setup/acl.sh@54 -- # run_test denied denied 00:03:11.577 17:40:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:11.577 17:40:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:11.577 17:40:15 -- common/autotest_common.sh@10 -- # set +x 00:03:11.577 ************************************ 00:03:11.577 START TEST denied 00:03:11.577 ************************************ 00:03:11.577 17:40:15 -- common/autotest_common.sh@1104 -- # denied 00:03:11.577 17:40:15 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:11.577 17:40:15 -- setup/acl.sh@38 -- # setup output config 00:03:11.577 17:40:15 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:11.577 17:40:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:11.577 17:40:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:15.784 0000:65:00.0 (8086 0a54): Skipping denied controller at 0000:65:00.0 00:03:15.784 17:40:19 -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:15.784 17:40:19 -- setup/acl.sh@28 -- # local dev driver 00:03:15.784 17:40:19 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:15.784 17:40:19 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:15.784 17:40:19 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:15.784 17:40:19 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:15.784 17:40:19 -- setup/acl.sh@33 -- # 
[[ nvme == \n\v\m\e ]] 00:03:15.784 17:40:19 -- setup/acl.sh@41 -- # setup reset 00:03:15.784 17:40:19 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:15.784 17:40:19 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:21.071 00:03:21.071 real 0m9.283s 00:03:21.071 user 0m3.042s 00:03:21.071 sys 0m5.489s 00:03:21.071 17:40:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:21.071 17:40:25 -- common/autotest_common.sh@10 -- # set +x 00:03:21.071 ************************************ 00:03:21.071 END TEST denied 00:03:21.071 ************************************ 00:03:21.071 17:40:25 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:21.071 17:40:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:21.071 17:40:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:21.071 17:40:25 -- common/autotest_common.sh@10 -- # set +x 00:03:21.071 ************************************ 00:03:21.071 START TEST allowed 00:03:21.071 ************************************ 00:03:21.071 17:40:25 -- common/autotest_common.sh@1104 -- # allowed 00:03:21.071 17:40:25 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:21.071 17:40:25 -- setup/acl.sh@45 -- # setup output config 00:03:21.071 17:40:25 -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:21.071 17:40:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.071 17:40:25 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:27.654 0000:65:00.0 (8086 0a54): nvme -> vfio-pci 00:03:27.654 17:40:31 -- setup/acl.sh@47 -- # verify 00:03:27.654 17:40:31 -- setup/acl.sh@28 -- # local dev driver 00:03:27.654 17:40:31 -- setup/acl.sh@48 -- # setup reset 00:03:27.654 17:40:31 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:27.654 17:40:31 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:31.864 00:03:31.864 real 0m10.466s 00:03:31.864 user 0m3.095s 00:03:31.864 sys 0m5.590s 00:03:31.864 17:40:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:31.864 17:40:35 -- common/autotest_common.sh@10 -- # set +x 00:03:31.864 ************************************ 00:03:31.864 END TEST allowed 00:03:31.864 ************************************ 00:03:31.864 00:03:31.864 real 0m28.326s 00:03:31.864 user 0m9.298s 00:03:31.864 sys 0m16.739s 00:03:31.864 17:40:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:31.864 17:40:35 -- common/autotest_common.sh@10 -- # set +x 00:03:31.864 ************************************ 00:03:31.864 END TEST acl 00:03:31.864 ************************************ 00:03:31.864 17:40:35 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:31.864 17:40:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:31.864 17:40:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:31.864 17:40:35 -- common/autotest_common.sh@10 -- # set +x 00:03:31.864 ************************************ 00:03:31.864 START TEST hugepages 00:03:31.864 ************************************ 00:03:31.864 17:40:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:31.864 * Looking for test storage... 
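Before the hugepages suite gets going, a note on the acl suite that just finished: it first parses `setup.sh status` output to collect NVMe controllers, then runs a "denied" pass (a blocked controller must be skipped) and an "allowed" pass (the same controller must be rebound to vfio-pci). A hedged usage sketch with the BDF from this log, assuming scripts/setup.sh reads PCI_BLOCKED/PCI_ALLOWED from the environment as the greps above indicate:

  # denied: a blocked controller has to be skipped by "setup.sh config".
  PCI_BLOCKED=" 0000:65:00.0" ./scripts/setup.sh config \
      | grep 'Skipping denied controller at 0000:65:00.0'
  ./scripts/setup.sh reset

  # allowed: with an allow-list, the controller must move nvme -> vfio-pci.
  PCI_ALLOWED="0000:65:00.0" ./scripts/setup.sh config \
      | grep -E '0000:65:00.0 .*: nvme -> .*'
  ./scripts/setup.sh reset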
00:03:31.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:31.864 17:40:35 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:31.864 17:40:35 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:31.864 17:40:35 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:31.864 17:40:35 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:31.864 17:40:35 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:31.864 17:40:35 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:31.864 17:40:35 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:31.864 17:40:35 -- setup/common.sh@18 -- # local node= 00:03:31.864 17:40:35 -- setup/common.sh@19 -- # local var val 00:03:31.864 17:40:35 -- setup/common.sh@20 -- # local mem_f mem 00:03:31.864 17:40:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.864 17:40:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.864 17:40:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.864 17:40:35 -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.864 17:40:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.864 17:40:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126343980 kB' 'MemFree: 103250744 kB' 'MemAvailable: 106709400 kB' 'Buffers: 9968 kB' 'Cached: 14384624 kB' 'SwapCached: 0 kB' 'Active: 11294404 kB' 'Inactive: 3540376 kB' 'Active(anon): 10846724 kB' 'Inactive(anon): 0 kB' 'Active(file): 447680 kB' 'Inactive(file): 3540376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 443524 kB' 'Mapped: 185216 kB' 'Shmem: 10406536 kB' 'KReclaimable: 511556 kB' 'Slab: 1235372 kB' 'SReclaimable: 511556 kB' 'SUnreclaim: 723816 kB' 'KernelStack: 25040 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69463440 kB' 'Committed_AS: 12382324 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230188 kB' 'VmallocChunk: 0 kB' 'Percpu: 113664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3025188 kB' 'DirectMap2M: 21821440 kB' 'DirectMap1G: 111149056 kB' 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.864 17:40:35 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.864 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.864 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 
00:03:31.865 17:40:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 
00:03:31.865 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # continue 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # IFS=': ' 00:03:31.865 17:40:35 -- setup/common.sh@31 -- # read -r var val _ 00:03:31.865 17:40:35 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.865 17:40:35 -- setup/common.sh@33 -- # echo 2048 00:03:31.865 17:40:35 -- setup/common.sh@33 -- # return 0 00:03:31.865 17:40:35 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:31.865 17:40:35 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:31.865 17:40:35 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:31.865 17:40:35 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:31.865 17:40:35 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:31.865 17:40:35 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:31.865 17:40:35 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:31.865 17:40:35 -- setup/hugepages.sh@207 -- # get_nodes 00:03:31.865 17:40:35 -- setup/hugepages.sh@27 -- # local node 00:03:31.865 17:40:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:31.865 17:40:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:31.865 17:40:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:31.865 17:40:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:31.865 17:40:35 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:31.865 17:40:35 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:31.865 17:40:35 -- setup/hugepages.sh@208 -- # clear_hp 00:03:31.865 17:40:35 -- setup/hugepages.sh@37 -- # local node hp 00:03:31.865 17:40:35 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:31.865 17:40:35 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:31.865 17:40:35 -- setup/hugepages.sh@41 -- # echo 0 00:03:31.865 17:40:35 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:31.865 17:40:35 -- setup/hugepages.sh@41 -- # echo 0 00:03:31.865 17:40:35 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:31.865 17:40:35 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:31.865 17:40:35 -- setup/hugepages.sh@41 -- # echo 0 00:03:31.865 17:40:35 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:31.865 17:40:35 -- setup/hugepages.sh@41 -- # echo 0 00:03:31.865 17:40:35 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:31.865 17:40:35 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:31.865 17:40:35 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:31.865 17:40:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:31.865 17:40:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:31.865 17:40:35 -- common/autotest_common.sh@10 -- # set +x 00:03:31.865 ************************************ 00:03:31.865 START TEST default_setup 00:03:31.866 ************************************ 00:03:31.866 17:40:35 -- common/autotest_common.sh@1104 -- # default_setup 00:03:31.866 17:40:35 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:31.866 17:40:35 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:31.866 17:40:35 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:31.866 17:40:35 -- setup/hugepages.sh@51 -- # shift 00:03:31.866 17:40:35 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:31.866 17:40:35 -- setup/hugepages.sh@52 -- # local node_ids 00:03:31.866 17:40:35 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:31.866 17:40:35 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:31.866 17:40:35 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:31.866 17:40:35 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:31.866 17:40:35 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:31.866 17:40:35 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:31.866 17:40:35 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:31.866 17:40:35 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:31.866 17:40:35 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:31.866 17:40:35 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
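The hugepages suite starts by reading the current state: get_meminfo pulls a single field out of /proc/meminfo (Hugepagesize resolved to 2048 kB above), get_nodes records the per-node hugepage counts, and clear_hp zeroes every per-node pool before the individual tests run. A minimal sketch of those two helpers, assuming the same IFS=': ' parsing shown in the trace (the real versions in test/setup/common.sh and hugepages.sh also handle per-node meminfo and save/restore of settings):

  # Sketch of get_meminfo: return the value column for one /proc/meminfo key.
  get_meminfo() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # e.g. "Hugepagesize:    2048 kB" -> var=Hugepagesize, val=2048
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }
  get_meminfo Hugepagesize    # -> 2048 (kB)

  # Sketch of clear_hp: zero every hugepage pool on every NUMA node (needs root).
  for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
      echo 0 > "$hp/nr_hugepages"
  done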
00:03:31.866 17:40:35 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:31.866 17:40:35 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:31.866 17:40:35 -- setup/hugepages.sh@73 -- # return 0 00:03:31.866 17:40:35 -- setup/hugepages.sh@137 -- # setup output 00:03:31.866 17:40:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.866 17:40:35 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:36.072 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:36.072 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:36.072 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:36.072 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:36.072 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:36.072 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:36.072 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:36.072 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:36.072 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:36.072 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:36.072 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:36.072 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:36.072 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:36.072 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:36.072 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:36.072 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:37.454 0000:65:00.0 (8086 0a54): nvme -> vfio-pci 00:03:37.719 17:40:41 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:37.719 17:40:41 -- setup/hugepages.sh@89 -- # local node 00:03:37.719 17:40:41 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:37.719 17:40:41 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:37.719 17:40:41 -- setup/hugepages.sh@92 -- # local surp 00:03:37.719 17:40:41 -- setup/hugepages.sh@93 -- # local resv 00:03:37.719 17:40:41 -- setup/hugepages.sh@94 -- # local anon 00:03:37.719 17:40:41 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:37.719 17:40:41 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:37.719 17:40:41 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:37.719 17:40:41 -- setup/common.sh@18 -- # local node= 00:03:37.719 17:40:41 -- setup/common.sh@19 -- # local var val 00:03:37.719 17:40:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.719 17:40:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.719 17:40:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.719 17:40:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.719 17:40:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.719 17:40:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.719 17:40:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126343980 kB' 'MemFree: 105416228 kB' 'MemAvailable: 108874844 kB' 'Buffers: 9968 kB' 'Cached: 14384772 kB' 'SwapCached: 0 kB' 'Active: 11312548 kB' 'Inactive: 3540376 kB' 'Active(anon): 10864868 kB' 'Inactive(anon): 0 kB' 'Active(file): 447680 kB' 'Inactive(file): 3540376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 461072 kB' 'Mapped: 185164 kB' 'Shmem: 10406684 kB' 'KReclaimable: 511516 kB' 'Slab: 1232648 kB' 'SReclaimable: 511516 kB' 'SUnreclaim: 721132 kB' 'KernelStack: 
25088 kB' 'PageTables: 8652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512016 kB' 'Committed_AS: 12407324 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230284 kB' 'VmallocChunk: 0 kB' 'Percpu: 113664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3025188 kB' 'DirectMap2M: 21821440 kB' 'DirectMap1G: 111149056 kB' 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:37.719 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.719 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.719 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- 
# [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.720 17:40:41 -- setup/common.sh@33 -- # echo 0 00:03:37.720 17:40:41 -- setup/common.sh@33 -- # return 0 00:03:37.720 17:40:41 -- setup/hugepages.sh@97 -- # anon=0 00:03:37.720 17:40:41 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:37.720 17:40:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.720 17:40:41 -- setup/common.sh@18 -- # local node= 00:03:37.720 17:40:41 -- setup/common.sh@19 -- # local var val 00:03:37.720 17:40:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.720 17:40:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.720 17:40:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.720 17:40:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.720 17:40:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.720 17:40:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126343980 kB' 'MemFree: 105419104 kB' 'MemAvailable: 108877720 kB' 'Buffers: 9968 kB' 'Cached: 14384776 kB' 'SwapCached: 0 kB' 'Active: 11312568 kB' 'Inactive: 3540376 kB' 'Active(anon): 10864888 kB' 'Inactive(anon): 0 kB' 'Active(file): 447680 kB' 'Inactive(file): 3540376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 461596 kB' 'Mapped: 185164 kB' 'Shmem: 10406688 kB' 'KReclaimable: 511516 kB' 'Slab: 1232656 kB' 'SReclaimable: 511516 kB' 'SUnreclaim: 721140 kB' 'KernelStack: 25040 kB' 'PageTables: 8500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512016 kB' 'Committed_AS: 12407336 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230284 kB' 'VmallocChunk: 0 kB' 'Percpu: 113664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
3025188 kB' 'DirectMap2M: 21821440 kB' 'DirectMap1G: 111149056 kB' 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.720 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.720 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 
17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 
-- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': 
' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.721 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.721 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.722 17:40:41 -- setup/common.sh@33 -- # echo 0 00:03:37.722 17:40:41 -- setup/common.sh@33 -- # return 0 00:03:37.722 17:40:41 -- setup/hugepages.sh@99 -- # surp=0 00:03:37.722 17:40:41 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:37.722 17:40:41 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:37.722 17:40:41 -- setup/common.sh@18 -- # local node= 00:03:37.722 17:40:41 -- setup/common.sh@19 -- # local var val 00:03:37.722 17:40:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.722 17:40:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.722 17:40:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.722 17:40:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.722 17:40:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.722 17:40:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.722 17:40:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126343980 kB' 'MemFree: 105420500 kB' 'MemAvailable: 108879116 kB' 'Buffers: 9968 kB' 'Cached: 14384788 kB' 'SwapCached: 0 kB' 'Active: 11312568 kB' 'Inactive: 3540376 kB' 'Active(anon): 10864888 kB' 'Inactive(anon): 0 kB' 'Active(file): 447680 kB' 'Inactive(file): 3540376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 461596 kB' 'Mapped: 185164 kB' 'Shmem: 10406700 kB' 'KReclaimable: 511516 kB' 'Slab: 1232656 kB' 'SReclaimable: 511516 kB' 'SUnreclaim: 721140 kB' 'KernelStack: 25040 kB' 'PageTables: 8500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512016 kB' 'Committed_AS: 12407352 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230284 kB' 'VmallocChunk: 0 kB' 'Percpu: 113664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3025188 kB' 'DirectMap2M: 21821440 kB' 'DirectMap1G: 111149056 kB' 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.722 17:40:41 -- 
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 
00:03:37.722 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.722 17:40:41 -- 
setup/common.sh@32 -- # continue 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.722 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.722 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 
00:03:37.723 17:40:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.723 17:40:41 -- setup/common.sh@33 -- # echo 0 00:03:37.723 17:40:41 -- setup/common.sh@33 -- # return 0 00:03:37.723 17:40:41 -- setup/hugepages.sh@100 -- # resv=0 00:03:37.723 17:40:41 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:37.723 nr_hugepages=1024 00:03:37.723 17:40:41 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:37.723 resv_hugepages=0 00:03:37.723 17:40:41 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:37.723 surplus_hugepages=0 00:03:37.723 17:40:41 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:37.723 anon_hugepages=0 00:03:37.723 17:40:41 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:37.723 17:40:41 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:37.723 17:40:41 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:37.723 17:40:41 -- setup/common.sh@17 -- # 
local get=HugePages_Total 00:03:37.723 17:40:41 -- setup/common.sh@18 -- # local node= 00:03:37.723 17:40:41 -- setup/common.sh@19 -- # local var val 00:03:37.723 17:40:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.723 17:40:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.723 17:40:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.723 17:40:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.723 17:40:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.723 17:40:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.723 17:40:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126343980 kB' 'MemFree: 105419996 kB' 'MemAvailable: 108878612 kB' 'Buffers: 9968 kB' 'Cached: 14384800 kB' 'SwapCached: 0 kB' 'Active: 11312596 kB' 'Inactive: 3540376 kB' 'Active(anon): 10864916 kB' 'Inactive(anon): 0 kB' 'Active(file): 447680 kB' 'Inactive(file): 3540376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 461600 kB' 'Mapped: 185164 kB' 'Shmem: 10406712 kB' 'KReclaimable: 511516 kB' 'Slab: 1232656 kB' 'SReclaimable: 511516 kB' 'SUnreclaim: 721140 kB' 'KernelStack: 25040 kB' 'PageTables: 8500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512016 kB' 'Committed_AS: 12407368 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230300 kB' 'VmallocChunk: 0 kB' 'Percpu: 113664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3025188 kB' 'DirectMap2M: 21821440 kB' 'DirectMap1G: 111149056 kB' 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.723 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.723 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 
17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 
17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.724 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.724 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.725 17:40:41 -- setup/common.sh@33 -- # echo 1024 00:03:37.725 17:40:41 -- setup/common.sh@33 -- # return 0 00:03:37.725 17:40:41 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:37.725 17:40:41 -- setup/hugepages.sh@112 -- # get_nodes 00:03:37.725 17:40:41 -- setup/hugepages.sh@27 -- # local node 00:03:37.725 17:40:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.725 17:40:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:37.725 17:40:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.725 17:40:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:37.725 17:40:41 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:37.725 17:40:41 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:37.725 17:40:41 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:37.725 17:40:41 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:37.725 17:40:41 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:37.725 17:40:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.725 17:40:41 -- setup/common.sh@18 -- # local node=0 00:03:37.725 17:40:41 -- setup/common.sh@19 -- # local var val 00:03:37.725 17:40:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.725 17:40:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.725 17:40:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:37.725 17:40:41 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:37.725 17:40:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.725 17:40:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.725 17:40:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65662000 kB' 'MemFree: 59024032 
kB' 'MemUsed: 6637968 kB' 'SwapCached: 0 kB' 'Active: 2459256 kB' 'Inactive: 288924 kB' 'Active(anon): 2086788 kB' 'Inactive(anon): 0 kB' 'Active(file): 372468 kB' 'Inactive(file): 288924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2619972 kB' 'Mapped: 140332 kB' 'AnonPages: 131492 kB' 'Shmem: 1958580 kB' 'KernelStack: 13320 kB' 'PageTables: 4784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 379120 kB' 'Slab: 739600 kB' 'SReclaimable: 379120 kB' 'SUnreclaim: 360480 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.725 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.725 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.726 17:40:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.726 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.726 17:40:41 
-- setup/common.sh@31 -- # IFS=': ' 00:03:37.726 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.726 17:40:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.726 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.726 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.726 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.726 17:40:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.726 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.726 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.726 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.726 17:40:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.726 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.726 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.726 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.726 17:40:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.726 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.726 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.726 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.726 17:40:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.726 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.726 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.726 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.726 17:40:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.726 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.726 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.726 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.726 17:40:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.726 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.726 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.726 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.726 17:40:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.726 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.726 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.726 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.726 17:40:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.726 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.726 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.726 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.726 17:40:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.726 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.726 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.726 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.726 17:40:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.726 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.726 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.726 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.726 17:40:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.726 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.726 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.726 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.726 17:40:41 -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.726 17:40:41 -- setup/common.sh@32 -- # continue 00:03:37.726 17:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.726 17:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.726 17:40:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.726 17:40:41 -- setup/common.sh@33 -- # echo 0 00:03:37.726 17:40:41 -- setup/common.sh@33 -- # return 0 00:03:37.726 17:40:41 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:37.726 17:40:41 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:37.726 17:40:41 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:37.726 17:40:41 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:37.726 17:40:41 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:37.726 node0=1024 expecting 1024 00:03:37.726 17:40:41 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:37.726 00:03:37.726 real 0m6.132s 00:03:37.726 user 0m1.632s 00:03:37.726 sys 0m2.667s 00:03:37.726 17:40:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:37.726 17:40:41 -- common/autotest_common.sh@10 -- # set +x 00:03:37.726 ************************************ 00:03:37.726 END TEST default_setup 00:03:37.726 ************************************ 00:03:37.726 17:40:41 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:37.726 17:40:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:37.726 17:40:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:37.726 17:40:41 -- common/autotest_common.sh@10 -- # set +x 00:03:37.987 ************************************ 00:03:37.987 START TEST per_node_1G_alloc 00:03:37.987 ************************************ 00:03:37.987 17:40:41 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:03:37.987 17:40:41 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:37.987 17:40:41 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:37.987 17:40:41 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:37.987 17:40:41 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:37.987 17:40:41 -- setup/hugepages.sh@51 -- # shift 00:03:37.987 17:40:41 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:37.987 17:40:41 -- setup/hugepages.sh@52 -- # local node_ids 00:03:37.987 17:40:41 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:37.987 17:40:41 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:37.987 17:40:41 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:37.987 17:40:41 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:37.987 17:40:41 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:37.987 17:40:41 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:37.987 17:40:41 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:37.987 17:40:41 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:37.987 17:40:41 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:37.987 17:40:41 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:37.987 17:40:41 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:37.987 17:40:41 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:37.987 17:40:41 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:37.987 17:40:41 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:37.987 17:40:41 -- setup/hugepages.sh@73 -- # return 0 00:03:37.987 17:40:41 -- setup/hugepages.sh@146 -- # 
NRHUGE=512 00:03:37.987 17:40:41 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:37.987 17:40:41 -- setup/hugepages.sh@146 -- # setup output 00:03:37.987 17:40:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.987 17:40:42 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:42.197 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:42.197 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:42.197 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:42.197 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:42.197 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:42.197 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:42.197 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:42.197 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:42.197 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:42.197 0000:65:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:42.197 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:42.197 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:42.197 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:42.197 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:42.197 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:42.197 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:42.197 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:42.197 17:40:45 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:42.197 17:40:45 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:42.197 17:40:45 -- setup/hugepages.sh@89 -- # local node 00:03:42.197 17:40:45 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:42.197 17:40:45 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:42.197 17:40:45 -- setup/hugepages.sh@92 -- # local surp 00:03:42.197 17:40:45 -- setup/hugepages.sh@93 -- # local resv 00:03:42.197 17:40:45 -- setup/hugepages.sh@94 -- # local anon 00:03:42.197 17:40:45 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:42.197 17:40:45 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:42.197 17:40:45 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:42.197 17:40:45 -- setup/common.sh@18 -- # local node= 00:03:42.197 17:40:45 -- setup/common.sh@19 -- # local var val 00:03:42.197 17:40:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.197 17:40:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.197 17:40:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.197 17:40:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.197 17:40:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.197 17:40:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.197 17:40:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.197 17:40:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.197 17:40:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126343980 kB' 'MemFree: 105422884 kB' 'MemAvailable: 108881500 kB' 'Buffers: 9968 kB' 'Cached: 14384920 kB' 'SwapCached: 0 kB' 'Active: 11311624 kB' 'Inactive: 3540376 kB' 'Active(anon): 10863944 kB' 'Inactive(anon): 0 kB' 'Active(file): 447680 kB' 'Inactive(file): 3540376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 
460428 kB' 'Mapped: 184544 kB' 'Shmem: 10406832 kB' 'KReclaimable: 511516 kB' 'Slab: 1232792 kB' 'SReclaimable: 511516 kB' 'SUnreclaim: 721276 kB' 'KernelStack: 25104 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512016 kB' 'Committed_AS: 12391412 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230348 kB' 'VmallocChunk: 0 kB' 'Percpu: 113664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3025188 kB' 'DirectMap2M: 21821440 kB' 'DirectMap1G: 111149056 kB' 00:03:42.197 17:40:45 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.197 17:40:45 -- setup/common.sh@32 -- # continue 00:03:42.197 17:40:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.197 17:40:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.197 17:40:45 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.197 17:40:45 -- setup/common.sh@32 -- # continue 00:03:42.197 17:40:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.197 17:40:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.197 17:40:45 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.197 17:40:45 -- setup/common.sh@32 -- # continue 00:03:42.197 17:40:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.197 17:40:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.197 17:40:45 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.197 17:40:45 -- setup/common.sh@32 -- # continue 00:03:42.197 17:40:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.197 17:40:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.197 17:40:45 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.197 17:40:45 -- setup/common.sh@32 -- # continue 00:03:42.197 17:40:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.197 17:40:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # IFS=': ' 
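The xtrace above shows setup/common.sh's get_meminfo walking a meminfo snapshot key by key until it reaches the requested field, switching to the per-node file when a node is given. The following is a minimal standalone sketch of that technique, reconstructed from the trace for readability; it is not the SPDK script itself, and the _sketch name is invented here.

shopt -s extglob
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries read the node-specific file, whose lines carry a
    # "Node <n> " prefix that must be stripped before matching keys.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        # Split "Key:   value kB" into key and numeric value.
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}
# e.g. get_meminfo_sketch HugePages_Total   -> 1024 on this runner
# e.g. get_meminfo_sketch HugePages_Surp 0  -> 0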
00:03:42.198 17:40:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # continue 
00:03:42.198 17:40:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:46 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:46 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:46 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:46 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:46 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:46 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:46 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:46 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.198 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.198 17:40:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.198 17:40:46 -- setup/common.sh@33 -- # echo 0 00:03:42.198 17:40:46 -- setup/common.sh@33 -- # return 0 00:03:42.198 17:40:46 -- setup/hugepages.sh@97 -- # anon=0 00:03:42.198 17:40:46 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:42.198 17:40:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.198 17:40:46 -- setup/common.sh@18 -- # local node= 00:03:42.198 17:40:46 -- setup/common.sh@19 -- # local var val 00:03:42.198 17:40:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.198 17:40:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.198 17:40:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.198 17:40:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.198 17:40:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.198 17:40:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.198 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.198 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126343980 kB' 'MemFree: 105427560 kB' 'MemAvailable: 108886176 kB' 'Buffers: 9968 kB' 'Cached: 14384924 kB' 'SwapCached: 0 kB' 'Active: 11310932 kB' 'Inactive: 3540376 kB' 'Active(anon): 10863252 kB' 'Inactive(anon): 0 kB' 'Active(file): 447680 kB' 'Inactive(file): 3540376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 460268 kB' 'Mapped: 184472 kB' 'Shmem: 10406836 kB' 'KReclaimable: 511516 kB' 'Slab: 1232832 kB' 'SReclaimable: 511516 kB' 'SUnreclaim: 721316 kB' 'KernelStack: 25008 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512016 kB' 'Committed_AS: 12392944 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230396 kB' 'VmallocChunk: 0 kB' 'Percpu: 113664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3025188 kB' 'DirectMap2M: 21821440 kB' 'DirectMap1G: 111149056 kB' 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 
17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 
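For the per_node_1G_alloc test started above, get_test_nr_hugepages turned the requested 1048576 kB into 512 hugepages and asked for that count on each node in HUGENODE=0,1. A small sketch of that arithmetic, assuming the 2048 kB Hugepagesize reported in the meminfo dumps; the variable names are illustrative, not the script's own.

# Illustrative only: mirrors the size -> pages -> per-node spread visible
# in the hugepages.sh xtrace (1048576 kB / 2048 kB = 512 pages per node).
size_kb=1048576            # 1 GiB expressed in kB, as passed to the test
default_hugepage_kb=2048   # Hugepagesize reported in the meminfo dumps
node_ids=(0 1)             # HUGENODE=0,1 in the trace

nr_hugepages=$(( size_kb / default_hugepage_kb ))   # 512
declare -A nodes_test
for node in "${node_ids[@]}"; do
    nodes_test[$node]=$nr_hugepages
done
echo "NRHUGE=$nr_hugepages HUGENODE=$(IFS=,; echo "${node_ids[*]}")"
# -> NRHUGE=512 HUGENODE=0,1, matching the values exported in the trace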
00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 
-- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.199 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.199 17:40:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 17:40:46 -- setup/common.sh@31 
-- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.200 17:40:46 -- setup/common.sh@33 -- # echo 0 00:03:42.200 17:40:46 -- setup/common.sh@33 -- # return 0 00:03:42.200 17:40:46 -- setup/hugepages.sh@99 -- # surp=0 00:03:42.200 17:40:46 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:42.200 17:40:46 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:42.200 17:40:46 -- setup/common.sh@18 -- # local node= 00:03:42.200 17:40:46 -- setup/common.sh@19 -- # local var val 00:03:42.200 17:40:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.200 17:40:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.200 17:40:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.200 17:40:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.200 17:40:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.200 17:40:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126343980 kB' 'MemFree: 105426892 kB' 'MemAvailable: 108885508 kB' 'Buffers: 9968 kB' 'Cached: 14384940 kB' 'SwapCached: 0 kB' 'Active: 11311044 kB' 'Inactive: 3540376 kB' 'Active(anon): 10863364 kB' 'Inactive(anon): 0 kB' 'Active(file): 447680 kB' 'Inactive(file): 3540376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 459764 kB' 'Mapped: 184476 kB' 'Shmem: 10406852 kB' 'KReclaimable: 511516 kB' 'Slab: 1232864 kB' 'SReclaimable: 511516 kB' 'SUnreclaim: 721348 kB' 'KernelStack: 25040 kB' 'PageTables: 8220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512016 kB' 'Committed_AS: 12391440 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230380 kB' 'VmallocChunk: 0 kB' 'Percpu: 113664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3025188 kB' 'DirectMap2M: 21821440 kB' 'DirectMap1G: 111149056 kB' 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 
17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.200 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.200 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # continue 
00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.201 17:40:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.201 17:40:46 -- setup/common.sh@33 -- # echo 0 00:03:42.201 17:40:46 -- setup/common.sh@33 -- # return 0 00:03:42.201 17:40:46 -- setup/hugepages.sh@100 -- # resv=0 00:03:42.201 17:40:46 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:42.201 nr_hugepages=1024 00:03:42.201 17:40:46 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:42.201 resv_hugepages=0 00:03:42.201 17:40:46 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:42.201 surplus_hugepages=0 00:03:42.201 17:40:46 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:42.201 anon_hugepages=0 00:03:42.201 17:40:46 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:42.201 17:40:46 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 
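The HugePages_Total lookup traced next is produced by the get_meminfo helper scanning /proc/meminfo (or a per-node meminfo file) key by key, which is why xtrace emits one "continue" per non-matching field before finally echoing the value. A simplified sketch of that lookup, using only what the trace shows — the real setup/common.sh uses mapfile plus an extglob prefix strip, which this sketch swaps for a sed pre-filter:

```bash
#!/usr/bin/env bash
# get_meminfo <field> [node] -- echo the value of <field> from /proc/meminfo,
# or from /sys/devices/system/node/node<N>/meminfo when a node index is given.
# Simplified sketch of the helper traced above; not the setup/common.sh source.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # each non-matching key is one "continue" in the xtrace
        echo "$val"
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")   # per-node files prefix lines with "Node <n> "
    return 1
}

get_meminfo HugePages_Total       # -> 1024 on the system traced above
get_meminfo HugePages_Surp 0      # -> 0 for NUMA node 0
```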
00:03:42.201 17:40:46 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:42.201 17:40:46 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:42.201 17:40:46 -- setup/common.sh@18 -- # local node= 00:03:42.201 17:40:46 -- setup/common.sh@19 -- # local var val 00:03:42.201 17:40:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.201 17:40:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.201 17:40:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.201 17:40:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.201 17:40:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.201 17:40:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.201 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.202 17:40:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126343980 kB' 'MemFree: 105426128 kB' 'MemAvailable: 108884744 kB' 'Buffers: 9968 kB' 'Cached: 14384956 kB' 'SwapCached: 0 kB' 'Active: 11311756 kB' 'Inactive: 3540376 kB' 'Active(anon): 10864076 kB' 'Inactive(anon): 0 kB' 'Active(file): 447680 kB' 'Inactive(file): 3540376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 460288 kB' 'Mapped: 184488 kB' 'Shmem: 10406868 kB' 'KReclaimable: 511516 kB' 'Slab: 1232864 kB' 'SReclaimable: 511516 kB' 'SUnreclaim: 721348 kB' 'KernelStack: 25232 kB' 'PageTables: 8676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512016 kB' 'Committed_AS: 12392976 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230460 kB' 'VmallocChunk: 0 kB' 'Percpu: 113664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3025188 kB' 'DirectMap2M: 21821440 kB' 'DirectMap1G: 111149056 kB' 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.202 17:40:46 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.202 17:40:46 
-- setup/common.sh@32 -- # continue 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.202 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.202 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 
00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.203 17:40:46 -- 
setup/common.sh@32 -- # continue 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.203 17:40:46 -- setup/common.sh@33 -- # echo 1024 00:03:42.203 17:40:46 -- setup/common.sh@33 -- # return 0 00:03:42.203 17:40:46 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:42.203 17:40:46 -- setup/hugepages.sh@112 -- # get_nodes 00:03:42.203 17:40:46 -- setup/hugepages.sh@27 -- # local node 00:03:42.203 17:40:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.203 17:40:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:42.203 17:40:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.203 17:40:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:42.203 17:40:46 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:42.203 17:40:46 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:42.203 17:40:46 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:42.203 17:40:46 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:42.203 17:40:46 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:42.203 17:40:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.203 17:40:46 -- setup/common.sh@18 -- # local node=0 00:03:42.203 17:40:46 -- setup/common.sh@19 -- # local var val 00:03:42.203 17:40:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.203 17:40:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.203 17:40:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:42.203 17:40:46 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:42.203 17:40:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.203 17:40:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:42.203 17:40:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65662000 kB' 'MemFree: 60075412 kB' 'MemUsed: 5586588 kB' 'SwapCached: 0 kB' 'Active: 2456968 kB' 'Inactive: 288924 kB' 'Active(anon): 2084500 kB' 'Inactive(anon): 0 kB' 'Active(file): 372468 kB' 'Inactive(file): 288924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2619996 kB' 'Mapped: 139752 kB' 'AnonPages: 129028 kB' 'Shmem: 1958604 kB' 'KernelStack: 13304 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 379120 kB' 'Slab: 739752 kB' 'SReclaimable: 379120 kB' 'SUnreclaim: 360632 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.203 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.203 17:40:46 -- setup/common.sh@32 -- # 
continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 
17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@33 -- # echo 0 00:03:42.204 17:40:46 -- setup/common.sh@33 -- # return 0 00:03:42.204 17:40:46 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:42.204 17:40:46 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:42.204 17:40:46 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:42.204 17:40:46 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:42.204 17:40:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.204 17:40:46 -- setup/common.sh@18 -- # local node=1 00:03:42.204 17:40:46 -- setup/common.sh@19 -- # local var val 00:03:42.204 17:40:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.204 17:40:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.204 17:40:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:42.204 17:40:46 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:42.204 17:40:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.204 17:40:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60681980 kB' 'MemFree: 45351016 kB' 'MemUsed: 15330964 kB' 'SwapCached: 0 kB' 'Active: 8854232 kB' 'Inactive: 3251452 kB' 'Active(anon): 8779020 kB' 'Inactive(anon): 0 kB' 'Active(file): 75212 kB' 'Inactive(file): 3251452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11774952 kB' 'Mapped: 44724 kB' 'AnonPages: 330856 kB' 'Shmem: 8448288 kB' 'KernelStack: 11800 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132396 kB' 'Slab: 493112 kB' 'SReclaimable: 132396 kB' 'SUnreclaim: 360716 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 
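The node-0 and node-1 passes traced here feed the per-node check that concludes just below with "node0=512 expecting 512" and "node1=512 expecting 512": the 1024 hugepages requested by the test are expected to land evenly across the two NUMA nodes, with surplus pages (0 in this run) excluded. A rough sketch of that check, built only from values visible in the trace and not from the hugepages.sh implementation itself:

```bash
#!/usr/bin/env bash
# Rough sketch of the per-node verification traced above:
# 1024 hugepages requested, expected to be split 512/512 across two NUMA nodes.
NRHUGE=1024
nodes=(/sys/devices/system/node/node[0-9]*)
expected=$(( NRHUGE / ${#nodes[@]} ))        # 512 with two nodes

for node in "${nodes[@]}"; do
    n=${node##*node}
    # Per-node meminfo reports HugePages_Total / HugePages_Free / HugePages_Surp.
    total=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
    surp=$(awk  '/HugePages_Surp/  {print $NF}' "$node/meminfo")
    # Surplus pages (0 in the trace) do not count toward the static allocation.
    echo "node$n=$(( total - surp )) expecting $expected"
done
```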
00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.204 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.204 17:40:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.205 17:40:46 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # continue 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.205 17:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.205 17:40:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.205 17:40:46 -- setup/common.sh@33 -- # echo 0 00:03:42.205 17:40:46 -- setup/common.sh@33 -- # return 0 00:03:42.205 17:40:46 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:42.205 17:40:46 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:42.205 17:40:46 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:42.205 17:40:46 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:42.205 17:40:46 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:42.205 node0=512 expecting 512 00:03:42.205 17:40:46 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:42.205 17:40:46 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:42.205 17:40:46 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:42.205 17:40:46 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:42.205 node1=512 expecting 512 00:03:42.205 17:40:46 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:42.205 00:03:42.205 real 0m4.185s 00:03:42.205 user 0m1.603s 00:03:42.205 sys 0m2.653s 00:03:42.205 17:40:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.205 17:40:46 -- common/autotest_common.sh@10 -- # set +x 00:03:42.205 ************************************ 00:03:42.205 END TEST per_node_1G_alloc 00:03:42.205 ************************************ 00:03:42.205 17:40:46 -- 
setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:42.205 17:40:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:42.205 17:40:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:42.205 17:40:46 -- common/autotest_common.sh@10 -- # set +x 00:03:42.205 ************************************ 00:03:42.205 START TEST even_2G_alloc 00:03:42.205 ************************************ 00:03:42.205 17:40:46 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:03:42.205 17:40:46 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:42.205 17:40:46 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:42.205 17:40:46 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:42.205 17:40:46 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:42.205 17:40:46 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:42.205 17:40:46 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:42.205 17:40:46 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:42.205 17:40:46 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:42.205 17:40:46 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:42.205 17:40:46 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:42.205 17:40:46 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:42.205 17:40:46 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:42.205 17:40:46 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:42.206 17:40:46 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:42.206 17:40:46 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:42.206 17:40:46 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:42.206 17:40:46 -- setup/hugepages.sh@83 -- # : 512 00:03:42.206 17:40:46 -- setup/hugepages.sh@84 -- # : 1 00:03:42.206 17:40:46 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:42.206 17:40:46 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:42.206 17:40:46 -- setup/hugepages.sh@83 -- # : 0 00:03:42.206 17:40:46 -- setup/hugepages.sh@84 -- # : 0 00:03:42.206 17:40:46 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:42.206 17:40:46 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:42.206 17:40:46 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:42.206 17:40:46 -- setup/hugepages.sh@153 -- # setup output 00:03:42.206 17:40:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.206 17:40:46 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:46.468 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:46.468 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:46.468 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:46.468 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:46.468 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:46.468 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:46.468 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:46.468 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:46.468 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:46.468 0000:65:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:46.468 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:46.468 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:46.468 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:46.468 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:46.468 0000:00:01.3 (8086 
0b00): Already using the vfio-pci driver 00:03:46.468 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:46.468 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:46.468 17:40:49 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:46.468 17:40:49 -- setup/hugepages.sh@89 -- # local node 00:03:46.468 17:40:49 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:46.468 17:40:49 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:46.468 17:40:49 -- setup/hugepages.sh@92 -- # local surp 00:03:46.468 17:40:49 -- setup/hugepages.sh@93 -- # local resv 00:03:46.468 17:40:49 -- setup/hugepages.sh@94 -- # local anon 00:03:46.468 17:40:49 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:46.468 17:40:50 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:46.468 17:40:50 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:46.468 17:40:50 -- setup/common.sh@18 -- # local node= 00:03:46.468 17:40:50 -- setup/common.sh@19 -- # local var val 00:03:46.468 17:40:50 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.468 17:40:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.468 17:40:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.468 17:40:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.468 17:40:50 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.468 17:40:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.468 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.468 17:40:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126343980 kB' 'MemFree: 105414428 kB' 'MemAvailable: 108873044 kB' 'Buffers: 9968 kB' 'Cached: 14385060 kB' 'SwapCached: 0 kB' 'Active: 11312076 kB' 'Inactive: 3540376 kB' 'Active(anon): 10864396 kB' 'Inactive(anon): 0 kB' 'Active(file): 447680 kB' 'Inactive(file): 3540376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 460660 kB' 'Mapped: 184616 kB' 'Shmem: 10406972 kB' 'KReclaimable: 511516 kB' 'Slab: 1233344 kB' 'SReclaimable: 511516 kB' 'SUnreclaim: 721828 kB' 'KernelStack: 25152 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512016 kB' 'Committed_AS: 12389164 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230380 kB' 'VmallocChunk: 0 kB' 'Percpu: 113664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3025188 kB' 'DirectMap2M: 21821440 kB' 'DirectMap1G: 111149056 kB' 00:03:46.468 17:40:50 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.468 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.468 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.468 17:40:50 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.468 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.468 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.468 17:40:50 -- setup/common.sh@32 -- # [[ MemAvailable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.468 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.468 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.468 17:40:50 -- setup/common.sh@31 -- # read -r var val _
[... identical per-key xtrace iterations (Buffers through HardwareCorrupted) elided ...]
00:03:46.469 17:40:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.469 17:40:50 --
setup/common.sh@33 -- # echo 0 00:03:46.469 17:40:50 -- setup/common.sh@33 -- # return 0 00:03:46.469 17:40:50 -- setup/hugepages.sh@97 -- # anon=0 00:03:46.469 17:40:50 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:46.469 17:40:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.469 17:40:50 -- setup/common.sh@18 -- # local node= 00:03:46.469 17:40:50 -- setup/common.sh@19 -- # local var val 00:03:46.469 17:40:50 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.469 17:40:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.469 17:40:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.469 17:40:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.469 17:40:50 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.469 17:40:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 17:40:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126343980 kB' 'MemFree: 105418064 kB' 'MemAvailable: 108876680 kB' 'Buffers: 9968 kB' 'Cached: 14385064 kB' 'SwapCached: 0 kB' 'Active: 11311596 kB' 'Inactive: 3540376 kB' 'Active(anon): 10863916 kB' 'Inactive(anon): 0 kB' 'Active(file): 447680 kB' 'Inactive(file): 3540376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 460404 kB' 'Mapped: 184504 kB' 'Shmem: 10406976 kB' 'KReclaimable: 511516 kB' 'Slab: 1233296 kB' 'SReclaimable: 511516 kB' 'SUnreclaim: 721780 kB' 'KernelStack: 25008 kB' 'PageTables: 8252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512016 kB' 'Committed_AS: 12389176 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230348 kB' 'VmallocChunk: 0 kB' 'Percpu: 113664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3025188 kB' 'DirectMap2M: 21821440 kB' 'DirectMap1G: 111149056 kB' 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
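The long runs of [[ key == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] and \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] tests are setup/common.sh's get_meminfo walking the snapshot it just printed: it mapfiles /proc/meminfo (or a per-node meminfo file when a node argument is given), strips any "Node <N> " prefix, then reads each line with IFS=': ' until the requested key matches and echoes its value. Below is a minimal standalone sketch of that lookup, assuming Linux and bash 4+; it illustrates the traced logic and reuses the names visible in the trace (get, node, mem_f, mem), but it is not the harness script itself.

#!/usr/bin/env bash
# Sketch of the meminfo lookup traced above: pick the global or per-node
# meminfo file, drop the "Node <N> " prefix, print the requested key's value.
shopt -s extglob

get_meminfo() {
    local get=$1 node=$2
    local var val _ line
    local mem_f=/proc/meminfo
    # switch to the per-node file when a node number is supplied (as in the trace)
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem <"$mem_f"
    # per-node files prefix every line with "Node <N> "; strip it (needs extglob)
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done
    echo 0
}

get_meminfo HugePages_Total      # e.g. 1024 on this box
get_meminfo HugePages_Free 0     # node 0's count, e.g. 512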
00:03:46.469 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # read -r var val _
[... identical per-key xtrace iterations (SwapCached through ShmemHugePages) elided ...]
00:03:46.469 17:40:50 -- setup/common.sh@32 -- # [[
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.469 17:40:50 -- setup/common.sh@33 -- # echo 0 00:03:46.469 17:40:50 -- setup/common.sh@33 -- # return 0 00:03:46.469 17:40:50 -- setup/hugepages.sh@99 -- # surp=0 00:03:46.469 17:40:50 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:46.469 17:40:50 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:46.469 17:40:50 -- setup/common.sh@18 -- # local node= 00:03:46.469 17:40:50 -- setup/common.sh@19 -- # local var val 00:03:46.469 17:40:50 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.469 17:40:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.469 17:40:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.469 17:40:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.469 17:40:50 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.469 17:40:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 
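At this point the trace has anon=0 and surp=0 and is fetching HugePages_Rsvd; a little further on (hugepages.sh@102 through @109) it echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0 and checks (( 1024 == nr_hugepages + surp + resv )). A small self-contained sketch of that accounting check follows, assuming the same /proc/meminfo fields; the 1024 target is simply this run's requested page count, and the awk extraction is an illustration rather than the harness's own code.

#!/usr/bin/env bash
# Verify that all requested hugepages are allocated, with no surplus or reserved
# pages outstanding, mirroring the arithmetic the trace runs at hugepages.sh@107/@109.
want=1024   # the page count this particular run asked for
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
anon=$(awk  '/^AnonHugePages:/   {print $2}' /proc/meminfo)   # reported in kB, not pages
echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
if (( want == total + surp + resv )) && (( want == total )); then
    echo "hugepage accounting consistent"
else
    echo "hugepage accounting mismatch" >&2
    exit 1
fi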
00:03:46.469 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 17:40:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126343980 kB' 'MemFree: 105417844 kB' 'MemAvailable: 108876460 kB' 'Buffers: 9968 kB' 'Cached: 14385076 kB' 'SwapCached: 0 kB' 'Active: 11311552 kB' 'Inactive: 3540376 kB' 'Active(anon): 10863872 kB' 'Inactive(anon): 0 kB' 'Active(file): 447680 kB' 'Inactive(file): 3540376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 460308 kB' 'Mapped: 184504 kB' 'Shmem: 10406988 kB' 'KReclaimable: 511516 kB' 'Slab: 1233296 kB' 'SReclaimable: 511516 kB' 'SUnreclaim: 721780 kB' 'KernelStack: 24992 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512016 kB' 'Committed_AS: 12389192 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230332 kB' 'VmallocChunk: 0 kB' 'Percpu: 113664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3025188 kB' 'DirectMap2M: 21821440 kB' 'DirectMap1G: 111149056 kB' 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.469 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.469 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.470 17:40:50 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # read -r var val _
[... identical per-key xtrace iterations (Active(anon) through FilePmdMapped) elided ...]
00:03:46.470 17:40:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470
17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.470 17:40:50 -- setup/common.sh@33 -- # echo 0 00:03:46.470 17:40:50 -- setup/common.sh@33 -- # return 0 00:03:46.470 17:40:50 -- setup/hugepages.sh@100 -- # resv=0 00:03:46.470 17:40:50 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:46.470 nr_hugepages=1024 00:03:46.470 17:40:50 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:46.470 resv_hugepages=0 00:03:46.470 17:40:50 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:46.470 surplus_hugepages=0 00:03:46.470 17:40:50 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:46.470 anon_hugepages=0 00:03:46.470 17:40:50 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.470 17:40:50 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:46.470 17:40:50 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:46.470 17:40:50 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:46.470 17:40:50 -- setup/common.sh@18 -- # local node= 00:03:46.470 17:40:50 -- setup/common.sh@19 -- # local var val 00:03:46.470 17:40:50 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.470 17:40:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.470 17:40:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.470 17:40:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.470 17:40:50 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.470 17:40:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 17:40:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126343980 kB' 'MemFree: 105417844 kB' 'MemAvailable: 108876460 kB' 'Buffers: 9968 kB' 'Cached: 14385088 kB' 'SwapCached: 0 kB' 'Active: 11311592 kB' 'Inactive: 3540376 kB' 'Active(anon): 10863912 kB' 'Inactive(anon): 0 kB' 'Active(file): 447680 kB' 'Inactive(file): 3540376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 460348 kB' 'Mapped: 
184504 kB' 'Shmem: 10407000 kB' 'KReclaimable: 511516 kB' 'Slab: 1233296 kB' 'SReclaimable: 511516 kB' 'SUnreclaim: 721780 kB' 'KernelStack: 25008 kB' 'PageTables: 8252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512016 kB' 'Committed_AS: 12389204 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230332 kB' 'VmallocChunk: 0 kB' 'Percpu: 113664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3025188 kB' 'DirectMap2M: 21821440 kB' 'DirectMap1G: 111149056 kB' 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.470 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:46.470 17:40:50 -- setup/common.sh@31 -- # read -r var val _
[... identical per-key xtrace iterations (Active(file) through CmaFree) elided ...]
00:03:46.471 17:40:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l
]] 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.471 17:40:50 -- setup/common.sh@33 -- # echo 1024 00:03:46.471 17:40:50 -- setup/common.sh@33 -- # return 0 00:03:46.471 17:40:50 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.471 17:40:50 -- setup/hugepages.sh@112 -- # get_nodes 00:03:46.471 17:40:50 -- setup/hugepages.sh@27 -- # local node 00:03:46.471 17:40:50 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.471 17:40:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:46.471 17:40:50 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.471 17:40:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:46.471 17:40:50 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:46.471 17:40:50 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:46.471 17:40:50 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.471 17:40:50 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.471 17:40:50 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:46.471 17:40:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.471 17:40:50 -- setup/common.sh@18 -- # local node=0 00:03:46.471 17:40:50 -- setup/common.sh@19 -- # local var val 00:03:46.471 17:40:50 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.471 17:40:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.471 17:40:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:46.471 17:40:50 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:46.471 17:40:50 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.471 17:40:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 17:40:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65662000 kB' 'MemFree: 60074008 kB' 'MemUsed: 5587992 kB' 'SwapCached: 0 kB' 'Active: 2456704 kB' 'Inactive: 288924 kB' 'Active(anon): 2084236 kB' 'Inactive(anon): 0 kB' 'Active(file): 372468 kB' 'Inactive(file): 288924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2620004 kB' 'Mapped: 139760 kB' 'AnonPages: 128880 kB' 'Shmem: 1958612 kB' 'KernelStack: 13160 kB' 'PageTables: 4076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 379120 kB' 'Slab: 740300 kB' 'SReclaimable: 379120 kB' 'SUnreclaim: 361180 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 
17:40:50 -- setup/common.sh@31 -- # read -r var val _
[... identical per-key xtrace iterations over node0 meminfo (MemUsed through SReclaimable) elided ...]
00:03:46.471 17:40:50 --
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.471 17:40:50 -- setup/common.sh@33 -- # echo 0 00:03:46.471 17:40:50 -- setup/common.sh@33 -- # return 0 00:03:46.471 17:40:50 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.471 17:40:50 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.471 17:40:50 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.471 17:40:50 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:46.471 17:40:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.471 17:40:50 -- setup/common.sh@18 -- # local node=1 00:03:46.471 17:40:50 -- setup/common.sh@19 -- # local var val 00:03:46.471 17:40:50 -- setup/common.sh@20 -- # local mem_f mem 00:03:46.471 17:40:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.471 17:40:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:46.471 17:40:50 -- setup/common.sh@24 -- 
# mem_f=/sys/devices/system/node/node1/meminfo 00:03:46.471 17:40:50 -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.471 17:40:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 17:40:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60681980 kB' 'MemFree: 45346556 kB' 'MemUsed: 15335424 kB' 'SwapCached: 0 kB' 'Active: 8857224 kB' 'Inactive: 3251452 kB' 'Active(anon): 8782012 kB' 'Inactive(anon): 0 kB' 'Active(file): 75212 kB' 'Inactive(file): 3251452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11775080 kB' 'Mapped: 45228 kB' 'AnonPages: 333784 kB' 'Shmem: 8448416 kB' 'KernelStack: 11784 kB' 'PageTables: 4004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132396 kB' 'Slab: 492996 kB' 'SReclaimable: 132396 kB' 'SUnreclaim: 360600 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.471 17:40:50 -- 
setup/common.sh@32 -- # continue 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.471 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.471 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 17:40:50 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.472 17:40:50 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # continue 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:46.472 17:40:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:46.472 17:40:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.472 17:40:50 -- setup/common.sh@33 -- # echo 0 00:03:46.472 17:40:50 -- setup/common.sh@33 -- # return 0 00:03:46.472 17:40:50 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.472 17:40:50 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.472 17:40:50 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.472 17:40:50 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.472 17:40:50 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:46.472 node0=512 expecting 512 00:03:46.472 17:40:50 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.472 17:40:50 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.472 17:40:50 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.472 17:40:50 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:46.472 node1=512 expecting 512 00:03:46.472 17:40:50 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:46.472 00:03:46.472 real 0m3.981s 00:03:46.472 user 0m1.500s 00:03:46.472 sys 0m2.540s 00:03:46.472 17:40:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.472 17:40:50 -- common/autotest_common.sh@10 -- # set +x 00:03:46.472 ************************************ 00:03:46.472 END TEST even_2G_alloc 00:03:46.472 ************************************ 00:03:46.472 17:40:50 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:46.472 17:40:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:46.472 17:40:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:46.472 17:40:50 -- common/autotest_common.sh@10 -- # set +x 00:03:46.472 ************************************ 00:03:46.472 START TEST odd_alloc 00:03:46.472 ************************************ 00:03:46.472 17:40:50 -- common/autotest_common.sh@1104 -- # odd_alloc 00:03:46.472 17:40:50 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:46.472 17:40:50 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:46.472 17:40:50 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:46.472 17:40:50 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:46.472 17:40:50 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:46.472 17:40:50 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:46.472 17:40:50 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:46.472 17:40:50 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:46.472 17:40:50 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:46.472 17:40:50 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:46.472 17:40:50 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:46.472 17:40:50 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:46.472 17:40:50 
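The odd_alloc test starting here requests 2098176 kB of huge pages (HUGEMEM=2049 MB x 1024 kB/MB), which at the 2048 kB page size is 1024.5 pages and ends up as nr_hugepages=1025; with two NUMA nodes the odd total is then split as node1=512 and node0=513, as the following trace shows. A minimal sketch of that split, assuming the loop walks nodes from the highest index down (function and variable names here are illustrative, not the exact setup/hugepages.sh code):
# Sketch: distribute a possibly odd huge-page count across NUMA nodes, last node first.
split_hugepages_sketch() {
  local total=$1 nodes=$2
  local -a per_node=()
  while (( nodes > 0 )); do
    per_node[nodes - 1]=$(( total / nodes ))   # integer division; the remainder stays with earlier nodes
    (( total -= per_node[nodes - 1] ))
    (( nodes-- ))
  done
  declare -p per_node
}
split_hugepages_sketch 1025 2   # -> declare -a per_node=([0]="513" [1]="512")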
-- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:46.472 17:40:50 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:46.472 17:40:50 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:46.472 17:40:50 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:46.472 17:40:50 -- setup/hugepages.sh@83 -- # : 513 00:03:46.472 17:40:50 -- setup/hugepages.sh@84 -- # : 1 00:03:46.472 17:40:50 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:46.472 17:40:50 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:46.472 17:40:50 -- setup/hugepages.sh@83 -- # : 0 00:03:46.472 17:40:50 -- setup/hugepages.sh@84 -- # : 0 00:03:46.472 17:40:50 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:46.472 17:40:50 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:46.472 17:40:50 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:46.472 17:40:50 -- setup/hugepages.sh@160 -- # setup output 00:03:46.472 17:40:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.472 17:40:50 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:50.681 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:50.681 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:50.681 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:50.681 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:50.681 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:50.681 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:50.681 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:50.681 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:50.681 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:50.681 0000:65:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:50.681 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:50.681 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:50.681 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:50.681 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:50.681 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:50.681 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:50.681 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:50.681 17:40:54 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:50.681 17:40:54 -- setup/hugepages.sh@89 -- # local node 00:03:50.681 17:40:54 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:50.681 17:40:54 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:50.681 17:40:54 -- setup/hugepages.sh@92 -- # local surp 00:03:50.681 17:40:54 -- setup/hugepages.sh@93 -- # local resv 00:03:50.681 17:40:54 -- setup/hugepages.sh@94 -- # local anon 00:03:50.681 17:40:54 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:50.681 17:40:54 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:50.681 17:40:54 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:50.681 17:40:54 -- setup/common.sh@18 -- # local node= 00:03:50.681 17:40:54 -- setup/common.sh@19 -- # local var val 00:03:50.681 17:40:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.681 17:40:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.681 17:40:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.681 17:40:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.681 17:40:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.681 
17:40:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.681 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.681 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.681 17:40:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126343980 kB' 'MemFree: 105401480 kB' 'MemAvailable: 108860064 kB' 'Buffers: 9968 kB' 'Cached: 14385212 kB' 'SwapCached: 0 kB' 'Active: 11312696 kB' 'Inactive: 3540376 kB' 'Active(anon): 10865016 kB' 'Inactive(anon): 0 kB' 'Active(file): 447680 kB' 'Inactive(file): 3540376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 461256 kB' 'Mapped: 184504 kB' 'Shmem: 10407124 kB' 'KReclaimable: 511484 kB' 'Slab: 1233248 kB' 'SReclaimable: 511484 kB' 'SUnreclaim: 721764 kB' 'KernelStack: 25024 kB' 'PageTables: 8028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70510992 kB' 'Committed_AS: 12390092 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230268 kB' 'VmallocChunk: 0 kB' 'Percpu: 113664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3025188 kB' 'DirectMap2M: 21821440 kB' 'DirectMap1G: 111149056 kB' 00:03:50.681 17:40:54 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.681 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.681 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.681 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.681 17:40:54 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.681 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.681 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.681 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.681 17:40:54 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.681 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ Inactive == 
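A quick consistency check on the /proc/meminfo snapshot printed above: HugePages_Total: 1025 at Hugepagesize: 2048 kB gives 1025 x 2048 kB = 2099200 kB, which matches the Hugetlb: 2099200 kB field, and HugePages_Free: 1025 indicates none of the pool is in use yet. The same check can be run on a live system with a short awk one-liner (illustrative, not part of the test scripts):
# Total huge-page pool size in kB, derived from /proc/meminfo
awk '/^HugePages_Total:/ {n=$2} /^Hugepagesize:/ {sz=$2} END {print n * sz " kB"}' /proc/meminfo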
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 
00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 
00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.682 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.682 17:40:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.682 17:40:54 -- setup/common.sh@33 -- # echo 0 00:03:50.682 17:40:54 -- setup/common.sh@33 -- # return 0 00:03:50.682 17:40:54 -- setup/hugepages.sh@97 -- # anon=0 00:03:50.682 17:40:54 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:50.682 17:40:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.682 17:40:54 -- setup/common.sh@18 -- # local node= 00:03:50.682 17:40:54 -- setup/common.sh@19 -- # local var val 00:03:50.682 17:40:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.682 17:40:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.682 17:40:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.682 17:40:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.682 17:40:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.682 17:40:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126343980 kB' 'MemFree: 105405256 kB' 'MemAvailable: 108863840 kB' 'Buffers: 9968 kB' 'Cached: 14385216 kB' 'SwapCached: 0 kB' 'Active: 11312560 kB' 'Inactive: 3540376 kB' 'Active(anon): 10864880 kB' 'Inactive(anon): 0 kB' 'Active(file): 447680 kB' 'Inactive(file): 3540376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 
kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 461180 kB' 'Mapped: 184504 kB' 'Shmem: 10407128 kB' 'KReclaimable: 511484 kB' 'Slab: 1233316 kB' 'SReclaimable: 511484 kB' 'SUnreclaim: 721832 kB' 'KernelStack: 25008 kB' 'PageTables: 8268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70510992 kB' 'Committed_AS: 12390104 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230252 kB' 'VmallocChunk: 0 kB' 'Percpu: 113664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3025188 kB' 'DirectMap2M: 21821440 kB' 'DirectMap1G: 111149056 kB' 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 
17:40:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.683 17:40:54 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:50.683 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.683 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ Unaccepted 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.684 17:40:54 -- setup/common.sh@33 -- # echo 0 00:03:50.684 17:40:54 -- setup/common.sh@33 -- # return 0 00:03:50.684 17:40:54 -- setup/hugepages.sh@99 -- # surp=0 00:03:50.684 17:40:54 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:50.684 17:40:54 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:50.684 17:40:54 -- setup/common.sh@18 -- # local node= 00:03:50.684 17:40:54 -- setup/common.sh@19 -- # local var val 00:03:50.684 17:40:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.684 17:40:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.684 17:40:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.684 17:40:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.684 17:40:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.684 17:40:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.684 17:40:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126343980 kB' 'MemFree: 105405196 kB' 'MemAvailable: 108863780 kB' 'Buffers: 9968 kB' 'Cached: 14385228 kB' 'SwapCached: 0 kB' 'Active: 11312624 kB' 'Inactive: 3540376 kB' 'Active(anon): 10864944 kB' 'Inactive(anon): 0 kB' 'Active(file): 447680 kB' 'Inactive(file): 3540376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 461180 kB' 'Mapped: 184504 kB' 'Shmem: 10407140 kB' 'KReclaimable: 511484 kB' 'Slab: 1233316 kB' 'SReclaimable: 511484 kB' 'SUnreclaim: 721832 kB' 'KernelStack: 25008 kB' 'PageTables: 8268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70510992 kB' 'Committed_AS: 12390120 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230252 kB' 'VmallocChunk: 0 kB' 'Percpu: 113664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3025188 kB' 'DirectMap2M: 21821440 kB' 'DirectMap1G: 111149056 kB' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 
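At this point the trace has already produced anon=0 (AnonHugePages) and surp=0 (HugePages_Surp) and is reading HugePages_Rsvd; those values feed the consistency check seen earlier at setup/hugepages.sh@110, where the kernel's HugePages_Total must equal the configured nr_hugepages plus surplus and reserved pages. A hedged sketch of that final check, reusing the get_meminfo_sketch helper sketched earlier in this log (names are illustrative; the exact surrounding script logic is not reproduced here):
# Sketch of the accounting check the trace is building toward (cf. hugepages.sh@110 above).
nr_hugepages=1025                               # count configured by odd_alloc in this run
surp=$(get_meminfo_sketch HugePages_Surp)
resv=$(get_meminfo_sketch HugePages_Rsvd)
total=$(get_meminfo_sketch HugePages_Total)
if (( total == nr_hugepages + surp + resv )); then
  echo "hugepage accounting consistent: $total == $nr_hugepages + $surp + $resv"
else
  echo "unexpected hugepage count: $total" >&2
fi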
-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.684 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.684 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- 
setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 
17:40:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.685 17:40:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.685 17:40:54 -- setup/common.sh@33 -- # echo 0 00:03:50.685 
17:40:54 -- setup/common.sh@33 -- # return 0 00:03:50.685 17:40:54 -- setup/hugepages.sh@100 -- # resv=0 00:03:50.685 17:40:54 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:50.685 nr_hugepages=1025 00:03:50.685 17:40:54 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:50.685 resv_hugepages=0 00:03:50.685 17:40:54 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:50.685 surplus_hugepages=0 00:03:50.685 17:40:54 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:50.685 anon_hugepages=0 00:03:50.685 17:40:54 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:50.685 17:40:54 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:50.685 17:40:54 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:50.685 17:40:54 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:50.685 17:40:54 -- setup/common.sh@18 -- # local node= 00:03:50.685 17:40:54 -- setup/common.sh@19 -- # local var val 00:03:50.685 17:40:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.685 17:40:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.685 17:40:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.685 17:40:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.685 17:40:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.685 17:40:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.685 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 17:40:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126343980 kB' 'MemFree: 105404944 kB' 'MemAvailable: 108863528 kB' 'Buffers: 9968 kB' 'Cached: 14385252 kB' 'SwapCached: 0 kB' 'Active: 11312244 kB' 'Inactive: 3540376 kB' 'Active(anon): 10864564 kB' 'Inactive(anon): 0 kB' 'Active(file): 447680 kB' 'Inactive(file): 3540376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 460784 kB' 'Mapped: 184504 kB' 'Shmem: 10407164 kB' 'KReclaimable: 511484 kB' 'Slab: 1233316 kB' 'SReclaimable: 511484 kB' 'SUnreclaim: 721832 kB' 'KernelStack: 24992 kB' 'PageTables: 8220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70510992 kB' 'Committed_AS: 12390136 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230252 kB' 'VmallocChunk: 0 kB' 'Percpu: 113664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3025188 kB' 'DirectMap2M: 21821440 kB' 'DirectMap1G: 111149056 kB' 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 17:40:54 -- setup/common.sh@31 
-- # read -r var val _ 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # continue 
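Every get_meminfo call traced here follows the same pattern: choose /proc/meminfo (or, when a node argument is given, /sys/devices/system/node/nodeN/meminfo), strip the per-node prefix, then walk the key/value pairs until the requested key matches and echo its value. A minimal stand-alone sketch of that idea, illustrative only and not the repository's actual setup/common.sh helper:
get_meminfo_sketch() {    # usage: get_meminfo_sketch <Key> [node]
  local get=$1 node=$2
  local mem_f=/proc/meminfo
  [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo
  local var val _
  # Per-node meminfo lines carry a "Node N " prefix; drop it, then match the key.
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done < <(sed 's/^Node [0-9]* //' "$mem_f")
  return 1
}
# e.g. get_meminfo_sketch HugePages_Total     -> 1025 on this box
#      get_meminfo_sketch HugePages_Surp 0    -> 0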
00:03:50.686 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.686 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.687 
17:40:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.687 17:40:54 -- setup/common.sh@33 -- # echo 1025 00:03:50.687 17:40:54 -- setup/common.sh@33 -- # return 0 00:03:50.687 17:40:54 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:50.687 17:40:54 -- setup/hugepages.sh@112 -- # get_nodes 00:03:50.687 17:40:54 -- setup/hugepages.sh@27 -- # local node 00:03:50.687 17:40:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.687 17:40:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:50.687 17:40:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.687 17:40:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:50.687 17:40:54 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:50.687 17:40:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:50.687 17:40:54 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:50.687 17:40:54 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:50.687 17:40:54 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:50.687 17:40:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.687 17:40:54 
-- setup/common.sh@18 -- # local node=0 00:03:50.687 17:40:54 -- setup/common.sh@19 -- # local var val 00:03:50.687 17:40:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.687 17:40:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.687 17:40:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:50.687 17:40:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:50.687 17:40:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.687 17:40:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.687 17:40:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65662000 kB' 'MemFree: 60067492 kB' 'MemUsed: 5594508 kB' 'SwapCached: 0 kB' 'Active: 2457948 kB' 'Inactive: 288924 kB' 'Active(anon): 2085480 kB' 'Inactive(anon): 0 kB' 'Active(file): 372468 kB' 'Inactive(file): 288924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2620084 kB' 'Mapped: 139776 kB' 'AnonPages: 130128 kB' 'Shmem: 1958692 kB' 'KernelStack: 13224 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 379088 kB' 'Slab: 740372 kB' 'SReclaimable: 379088 kB' 'SUnreclaim: 361284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.687 17:40:54 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.687 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.687 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 
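The per-node figures recorded by get_nodes just above (512 huge pages on node 0, 513 on node 1) are the kernel's own per-NUMA-node counters; besides the nodeN/meminfo files read here, the same numbers are exposed in sysfs. An illustrative check, assuming the 2048 kB page size reported in this run:
for n in /sys/devices/system/node/node[0-9]*; do
  # Same value as the "HugePages_Total:" field in $n/meminfo above.
  printf '%s: %s 2MiB pages\n' "${n##*/}" \
    "$(cat "$n/hugepages/hugepages-2048kB/nr_hugepages")"
done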
00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
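A note on how these comparisons are rendered: the right-hand side of == inside [[ ]] is a glob pattern, and when the script quotes it, bash's xtrace prints every character backslash-escaped (\H\u\g\e\P\a\g\e\s\_\S\u\r\p) to show that it will be matched literally rather than as a pattern. A small reproduction:
set -x
var=HugePages_Surp
[[ $var == "HugePages_Surp" ]] && echo matched
# xtrace shows: [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]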
00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@33 -- # echo 0 00:03:50.688 17:40:54 -- setup/common.sh@33 -- # return 0 00:03:50.688 17:40:54 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:50.688 17:40:54 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:50.688 17:40:54 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:50.688 17:40:54 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:50.688 17:40:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.688 17:40:54 -- setup/common.sh@18 -- # local node=1 00:03:50.688 17:40:54 -- setup/common.sh@19 -- # local var val 00:03:50.688 17:40:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.688 17:40:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.688 17:40:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:50.688 17:40:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:50.688 17:40:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.688 17:40:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60681980 kB' 'MemFree: 45337224 kB' 'MemUsed: 15344756 kB' 'SwapCached: 0 kB' 'Active: 8854684 kB' 'Inactive: 3251452 kB' 'Active(anon): 8779472 kB' 'Inactive(anon): 0 kB' 'Active(file): 75212 kB' 'Inactive(file): 3251452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11775140 kB' 'Mapped: 44728 kB' 'AnonPages: 331052 kB' 'Shmem: 8448476 kB' 'KernelStack: 11784 kB' 'PageTables: 3944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132396 kB' 'Slab: 492944 kB' 'SReclaimable: 132396 kB' 'SUnreclaim: 360548 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.688 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.688 17:40:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 
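For context on the counts being accumulated here: odd_alloc asks for 1025 huge pages, which cannot be split evenly across the two nodes, so one node ends up with 512 and the other with 513. A hypothetical helper showing that arithmetic (the real get_test_nr_hugepages_per_node in hugepages.sh distributes the pages differently in code, but lands on the same counts):
split_hugepages() {    # usage: split_hugepages <total> <nodes>
  local total=$1 nodes=$2 i per=$(( $1 / $2 ))
  for (( i = 0; i < nodes; i++ )); do
    if (( i == nodes - 1 )); then
      echo "node$i=$(( total - per * (nodes - 1) ))"   # last node takes the remainder
    else
      echo "node$i=$per"
    fi
  done
}
split_hugepages 1025 2    # -> node0=512, node1=513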
00:03:50.689 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.689 17:40:54 -- 
setup/common.sh@32 -- # continue 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # continue 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.689 17:40:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.689 17:40:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.689 17:40:54 -- setup/common.sh@33 -- # echo 0 00:03:50.689 17:40:54 -- setup/common.sh@33 -- # return 0 00:03:50.689 17:40:54 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:50.689 17:40:54 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:50.689 17:40:54 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:50.689 17:40:54 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:50.689 17:40:54 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:50.689 node0=512 expecting 513 00:03:50.689 17:40:54 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:50.689 17:40:54 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 
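The "expecting" lines just above and below can look like a failure at first glance: node 0 holds 512 pages where the test expected 513, and node 1 holds 513 where it expected 512. The test still passes because verify_nr_hugepages records both sets of counts as array indices (sorted_t and sorted_s in the trace) and only compares the sorted key lists, so the layout is accepted as long as the multiset of per-node counts matches. A reduced sketch of that comparison, with names shortened:
got=(); want=()
got[512]=1;  got[513]=1     # per-node counts actually observed
want[513]=1; want[512]=1    # per-node counts the test requested (nodes swapped)
# Indexed-array keys expand in ascending order, so both sides read "512 513":
[[ "${!got[*]}" == "${!want[*]}" ]] && echo "hugepage layout accepted"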
00:03:50.689 17:40:54 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:50.689 17:40:54 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:50.689 node1=513 expecting 512 00:03:50.689 17:40:54 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:50.689 00:03:50.689 real 0m4.239s 00:03:50.689 user 0m1.680s 00:03:50.689 sys 0m2.637s 00:03:50.689 17:40:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:50.689 17:40:54 -- common/autotest_common.sh@10 -- # set +x 00:03:50.689 ************************************ 00:03:50.689 END TEST odd_alloc 00:03:50.689 ************************************ 00:03:50.689 17:40:54 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:50.689 17:40:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:50.689 17:40:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:50.689 17:40:54 -- common/autotest_common.sh@10 -- # set +x 00:03:50.689 ************************************ 00:03:50.689 START TEST custom_alloc 00:03:50.689 ************************************ 00:03:50.689 17:40:54 -- common/autotest_common.sh@1104 -- # custom_alloc 00:03:50.689 17:40:54 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:50.689 17:40:54 -- setup/hugepages.sh@169 -- # local node 00:03:50.689 17:40:54 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:50.689 17:40:54 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:50.689 17:40:54 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:50.689 17:40:54 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:50.689 17:40:54 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:50.689 17:40:54 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:50.689 17:40:54 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:50.689 17:40:54 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:50.689 17:40:54 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:50.689 17:40:54 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:50.689 17:40:54 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:50.689 17:40:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:50.689 17:40:54 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:50.689 17:40:54 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:50.689 17:40:54 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:50.689 17:40:54 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:50.689 17:40:54 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:50.689 17:40:54 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.689 17:40:54 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:50.689 17:40:54 -- setup/hugepages.sh@83 -- # : 256 00:03:50.689 17:40:54 -- setup/hugepages.sh@84 -- # : 1 00:03:50.689 17:40:54 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.690 17:40:54 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:50.690 17:40:54 -- setup/hugepages.sh@83 -- # : 0 00:03:50.690 17:40:54 -- setup/hugepages.sh@84 -- # : 0 00:03:50.690 17:40:54 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:50.690 17:40:54 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:50.690 17:40:54 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:50.690 17:40:54 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:50.690 17:40:54 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:50.690 17:40:54 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:50.690 17:40:54 -- setup/hugepages.sh@55 -- # (( size >= 
default_hugepages )) 00:03:50.690 17:40:54 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:50.690 17:40:54 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:50.690 17:40:54 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:50.690 17:40:54 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:50.690 17:40:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:50.690 17:40:54 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:50.690 17:40:54 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:50.690 17:40:54 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:50.690 17:40:54 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:50.690 17:40:54 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:50.690 17:40:54 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:50.690 17:40:54 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:50.690 17:40:54 -- setup/hugepages.sh@78 -- # return 0 00:03:50.690 17:40:54 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:50.690 17:40:54 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:50.690 17:40:54 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:50.690 17:40:54 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:50.690 17:40:54 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:50.690 17:40:54 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:50.690 17:40:54 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:50.690 17:40:54 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:50.690 17:40:54 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:50.690 17:40:54 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:50.690 17:40:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:50.690 17:40:54 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:50.690 17:40:54 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:50.690 17:40:54 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:50.690 17:40:54 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:50.690 17:40:54 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:50.690 17:40:54 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:50.690 17:40:54 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:50.690 17:40:54 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:50.690 17:40:54 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:50.690 17:40:54 -- setup/hugepages.sh@78 -- # return 0 00:03:50.690 17:40:54 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:50.690 17:40:54 -- setup/hugepages.sh@187 -- # setup output 00:03:50.690 17:40:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.690 17:40:54 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:54.901 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:54.901 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:54.901 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:54.901 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:54.901 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:54.901 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:54.901 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:54.901 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:54.901 0000:00:01.6 
(8086 0b00): Already using the vfio-pci driver 00:03:54.901 0000:65:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:54.901 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:54.901 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:54.901 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:54.901 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:54.901 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:54.901 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:54.901 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:54.901 17:40:58 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:54.901 17:40:58 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:54.901 17:40:58 -- setup/hugepages.sh@89 -- # local node 00:03:54.901 17:40:58 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:54.901 17:40:58 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:54.901 17:40:58 -- setup/hugepages.sh@92 -- # local surp 00:03:54.901 17:40:58 -- setup/hugepages.sh@93 -- # local resv 00:03:54.901 17:40:58 -- setup/hugepages.sh@94 -- # local anon 00:03:54.901 17:40:58 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:54.901 17:40:58 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:54.901 17:40:58 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:54.901 17:40:58 -- setup/common.sh@18 -- # local node= 00:03:54.901 17:40:58 -- setup/common.sh@19 -- # local var val 00:03:54.901 17:40:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:54.901 17:40:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.901 17:40:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.901 17:40:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.901 17:40:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.901 17:40:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.901 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.901 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126343980 kB' 'MemFree: 104356476 kB' 'MemAvailable: 107815060 kB' 'Buffers: 9968 kB' 'Cached: 14385356 kB' 'SwapCached: 0 kB' 'Active: 11315060 kB' 'Inactive: 3540376 kB' 'Active(anon): 10867380 kB' 'Inactive(anon): 0 kB' 'Active(file): 447680 kB' 'Inactive(file): 3540376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462912 kB' 'Mapped: 184608 kB' 'Shmem: 10407268 kB' 'KReclaimable: 511484 kB' 'Slab: 1233296 kB' 'SReclaimable: 511484 kB' 'SUnreclaim: 721812 kB' 'KernelStack: 25168 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69987728 kB' 'Committed_AS: 12394308 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230284 kB' 'VmallocChunk: 0 kB' 'Percpu: 113664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3025188 kB' 'DirectMap2M: 21821440 kB' 'DirectMap1G: 111149056 kB' 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 
17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- 
setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.902 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.902 17:40:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.903 17:40:58 -- setup/common.sh@33 -- # echo 0 00:03:54.903 17:40:58 -- setup/common.sh@33 -- # return 0 00:03:54.903 17:40:58 -- setup/hugepages.sh@97 -- # anon=0 00:03:54.903 17:40:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:54.903 17:40:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.903 17:40:58 -- setup/common.sh@18 -- # local node= 00:03:54.903 17:40:58 -- setup/common.sh@19 -- # local var val 00:03:54.903 17:40:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:54.903 17:40:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.903 17:40:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.903 17:40:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.903 17:40:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.903 17:40:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126343980 kB' 'MemFree: 104358848 kB' 'MemAvailable: 107817432 kB' 'Buffers: 9968 kB' 'Cached: 14385360 kB' 'SwapCached: 0 kB' 'Active: 11314988 kB' 'Inactive: 3540376 kB' 'Active(anon): 10867308 kB' 'Inactive(anon): 0 kB' 'Active(file): 447680 kB' 'Inactive(file): 3540376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462916 kB' 'Mapped: 184596 kB' 'Shmem: 10407272 kB' 'KReclaimable: 511484 kB' 'Slab: 1233264 kB' 'SReclaimable: 511484 kB' 'SUnreclaim: 721780 kB' 'KernelStack: 25152 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69987728 kB' 'Committed_AS: 12394060 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230268 kB' 'VmallocChunk: 0 kB' 'Percpu: 113664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3025188 kB' 'DirectMap2M: 21821440 kB' 'DirectMap1G: 111149056 kB' 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # 
continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ 
SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.903 17:40:58 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.903 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.903 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- 
# continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.904 17:40:58 -- setup/common.sh@33 -- # echo 0 00:03:54.904 17:40:58 -- setup/common.sh@33 -- # return 0 00:03:54.904 17:40:58 -- setup/hugepages.sh@99 -- # surp=0 00:03:54.904 17:40:58 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:54.904 17:40:58 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:54.904 17:40:58 -- setup/common.sh@18 -- # local node= 00:03:54.904 17:40:58 -- setup/common.sh@19 -- # local var val 00:03:54.904 17:40:58 -- 
setup/common.sh@20 -- # local mem_f mem 00:03:54.904 17:40:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.904 17:40:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.904 17:40:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.904 17:40:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.904 17:40:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126343980 kB' 'MemFree: 104361276 kB' 'MemAvailable: 107819860 kB' 'Buffers: 9968 kB' 'Cached: 14385372 kB' 'SwapCached: 0 kB' 'Active: 11314632 kB' 'Inactive: 3540376 kB' 'Active(anon): 10866952 kB' 'Inactive(anon): 0 kB' 'Active(file): 447680 kB' 'Inactive(file): 3540376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462944 kB' 'Mapped: 184520 kB' 'Shmem: 10407284 kB' 'KReclaimable: 511484 kB' 'Slab: 1233280 kB' 'SReclaimable: 511484 kB' 'SUnreclaim: 721796 kB' 'KernelStack: 25200 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69987728 kB' 'Committed_AS: 12394076 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230284 kB' 'VmallocChunk: 0 kB' 'Percpu: 113664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3025188 kB' 'DirectMap2M: 21821440 kB' 'DirectMap1G: 111149056 kB' 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.904 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.904 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 
00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.905 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.905 17:40:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.905 17:40:58 -- setup/common.sh@33 -- # echo 0 00:03:54.905 17:40:58 -- setup/common.sh@33 -- # return 0 00:03:54.905 17:40:58 -- setup/hugepages.sh@100 -- # resv=0 00:03:54.906 17:40:58 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:54.906 nr_hugepages=1536 00:03:54.906 17:40:58 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:54.906 resv_hugepages=0 00:03:54.906 17:40:58 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:54.906 surplus_hugepages=0 00:03:54.906 17:40:58 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:54.906 anon_hugepages=0 00:03:54.906 17:40:58 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:54.906 17:40:58 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:54.906 17:40:58 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:54.906 17:40:58 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:54.906 17:40:58 -- setup/common.sh@18 -- # local node= 00:03:54.906 17:40:58 -- setup/common.sh@19 -- # local var val 00:03:54.906 17:40:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:54.906 17:40:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.906 17:40:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.906 17:40:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.906 17:40:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.906 17:40:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.906 17:40:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
126343980 kB' 'MemFree: 104361576 kB' 'MemAvailable: 107820160 kB' 'Buffers: 9968 kB' 'Cached: 14385372 kB' 'SwapCached: 0 kB' 'Active: 11314280 kB' 'Inactive: 3540376 kB' 'Active(anon): 10866600 kB' 'Inactive(anon): 0 kB' 'Active(file): 447680 kB' 'Inactive(file): 3540376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462576 kB' 'Mapped: 184520 kB' 'Shmem: 10407284 kB' 'KReclaimable: 511484 kB' 'Slab: 1233280 kB' 'SReclaimable: 511484 kB' 'SUnreclaim: 721796 kB' 'KernelStack: 25120 kB' 'PageTables: 8216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69987728 kB' 'Committed_AS: 12394336 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230220 kB' 'VmallocChunk: 0 kB' 'Percpu: 113664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3025188 kB' 'DirectMap2M: 21821440 kB' 'DirectMap1G: 111149056 kB' 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.906 17:40:58 -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.906 17:40:58 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.906 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.906 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # [[ CommitLimit 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.907 17:40:58 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.907 17:40:58 -- setup/common.sh@33 -- # echo 1536 00:03:54.907 17:40:58 -- setup/common.sh@33 -- # return 0 00:03:54.907 17:40:58 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:54.907 17:40:58 -- setup/hugepages.sh@112 -- # get_nodes 00:03:54.907 17:40:58 -- setup/hugepages.sh@27 -- # local node 00:03:54.907 17:40:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.907 17:40:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:54.907 17:40:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.907 17:40:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:54.907 17:40:58 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:54.907 17:40:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:54.907 17:40:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.907 17:40:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.907 17:40:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:54.907 17:40:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.907 17:40:58 -- setup/common.sh@18 -- # local node=0 00:03:54.907 17:40:58 -- setup/common.sh@19 -- # local var val 00:03:54.907 17:40:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:54.907 17:40:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.907 17:40:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:54.907 17:40:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:54.907 17:40:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.907 17:40:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65662000 kB' 'MemFree: 60078516 kB' 'MemUsed: 5583484 kB' 'SwapCached: 0 kB' 'Active: 2459704 kB' 'Inactive: 288924 kB' 'Active(anon): 2087236 kB' 'Inactive(anon): 0 kB' 'Active(file): 372468 kB' 'Inactive(file): 288924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2620180 kB' 'Mapped: 139788 kB' 'AnonPages: 131628 kB' 'Shmem: 1958788 kB' 'KernelStack: 13352 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 379088 kB' 'Slab: 740564 kB' 'SReclaimable: 379088 kB' 'SUnreclaim: 361476 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:54.907 17:40:58 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.907 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.907 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 
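For readers following the trace, the loop above is the scripted get_meminfo helper scanning node0's meminfo dump field by field until it reaches the requested key. A minimal stand-alone sketch of that pattern (illustrative only, not the SPDK script itself; the function name and exact structure are assumptions) looks like this:

#!/usr/bin/env bash
shopt -s extglob
# Illustrative sketch: pick the per-node meminfo file when a node is given,
# strip the "Node N " prefix those files carry, then scan key/value pairs
# until the requested field (e.g. HugePages_Total) is found.
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # no-op for the system-wide file
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
# get_meminfo_sketch HugePages_Total 0   ->  512 on the node0 dump printed above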
00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- 
setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@33 -- # echo 0 00:03:54.908 17:40:58 -- setup/common.sh@33 -- # return 0 00:03:54.908 17:40:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:54.908 17:40:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.908 17:40:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.908 17:40:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 
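The surrounding hugepages.sh trace (the @110 check and the @115-@117 loop) is simple bookkeeping: the system-wide HugePages_Total just echoed (1536) must equal nr_hugepages plus surplus plus reserved pages, and each node's expected count is then adjusted by that node's HugePages_Surp before the per-node comparison. A rough, hedged restatement using the literal values from this run (the real script derives every one of them at runtime, and it reuses its own get_meminfo, not the sketch above):

# Values copied from this run's trace; variable names are illustrative.
hugepages_total=1536                 # HugePages_Total echoed by the scan above
nr_hugepages=1536 surp=0 resv=0
(( hugepages_total == nr_hugepages + surp + resv )) || echo "unexpected hugepage total"

expected=(512 1024)                  # the 512/1024 split this custom_alloc run set up
for node in 0 1; do
    node_surp=$(get_meminfo_sketch HugePages_Surp "$node")   # both nodes report 0 here
    (( expected[node] += resv + node_surp ))
    echo "node${node}=${expected[node]}"   # compare with the 'nodeN=... expecting ...' lines further down
done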
00:03:54.908 17:40:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.908 17:40:58 -- setup/common.sh@18 -- # local node=1 00:03:54.908 17:40:58 -- setup/common.sh@19 -- # local var val 00:03:54.908 17:40:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:54.908 17:40:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.908 17:40:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:54.908 17:40:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:54.908 17:40:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.908 17:40:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60681980 kB' 'MemFree: 44286892 kB' 'MemUsed: 16395088 kB' 'SwapCached: 0 kB' 'Active: 8854836 kB' 'Inactive: 3251452 kB' 'Active(anon): 8779624 kB' 'Inactive(anon): 0 kB' 'Active(file): 75212 kB' 'Inactive(file): 3251452 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11775200 kB' 'Mapped: 44672 kB' 'AnonPages: 331216 kB' 'Shmem: 8448536 kB' 'KernelStack: 11784 kB' 'PageTables: 3988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132396 kB' 'Slab: 492716 kB' 'SReclaimable: 132396 kB' 'SUnreclaim: 360320 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.908 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.908 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 
00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- 
setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 
00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # continue 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.909 17:40:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.909 17:40:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.909 17:40:58 -- setup/common.sh@33 -- # echo 0 00:03:54.909 17:40:58 -- setup/common.sh@33 -- # return 0 00:03:54.909 17:40:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:54.909 17:40:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:54.909 17:40:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.909 17:40:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.909 17:40:58 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:54.909 node0=512 expecting 512 00:03:54.909 17:40:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:54.909 17:40:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.909 17:40:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.909 17:40:58 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:54.909 node1=1024 expecting 1024 00:03:54.909 17:40:58 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:54.909 00:03:54.909 real 0m4.126s 00:03:54.909 user 0m1.647s 00:03:54.909 sys 0m2.551s 00:03:54.909 17:40:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.909 17:40:58 -- common/autotest_common.sh@10 -- # set +x 00:03:54.909 ************************************ 00:03:54.909 END TEST custom_alloc 00:03:54.909 ************************************ 00:03:54.909 17:40:58 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:54.909 17:40:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:54.909 17:40:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:54.909 17:40:58 -- common/autotest_common.sh@10 -- # set +x 00:03:54.909 ************************************ 00:03:54.909 START TEST no_shrink_alloc 00:03:54.909 ************************************ 00:03:54.909 17:40:58 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:03:54.909 17:40:58 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:54.909 17:40:58 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:54.909 17:40:58 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:54.909 17:40:58 -- setup/hugepages.sh@51 -- # shift 00:03:54.909 17:40:58 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:54.909 17:40:58 -- setup/hugepages.sh@52 
-- # local node_ids 00:03:54.910 17:40:58 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:54.910 17:40:58 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:54.910 17:40:58 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:54.910 17:40:58 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:54.910 17:40:58 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:54.910 17:40:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:54.910 17:40:58 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:54.910 17:40:58 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:54.910 17:40:58 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:54.910 17:40:58 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:54.910 17:40:58 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:54.910 17:40:58 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:54.910 17:40:58 -- setup/hugepages.sh@73 -- # return 0 00:03:54.910 17:40:58 -- setup/hugepages.sh@198 -- # setup output 00:03:54.910 17:40:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.910 17:40:58 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:59.121 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:59.121 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:59.121 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:59.121 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:59.121 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:59.121 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:59.121 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:59.121 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:59.121 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:59.121 0000:65:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:59.121 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:59.121 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:59.121 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:59.121 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:59.121 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:59.121 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:59.121 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:59.121 17:41:02 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:59.121 17:41:02 -- setup/hugepages.sh@89 -- # local node 00:03:59.121 17:41:02 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:59.121 17:41:02 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:59.121 17:41:02 -- setup/hugepages.sh@92 -- # local surp 00:03:59.121 17:41:02 -- setup/hugepages.sh@93 -- # local resv 00:03:59.121 17:41:02 -- setup/hugepages.sh@94 -- # local anon 00:03:59.121 17:41:02 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:59.121 17:41:02 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:59.121 17:41:02 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:59.121 17:41:02 -- setup/common.sh@18 -- # local node= 00:03:59.121 17:41:02 -- setup/common.sh@19 -- # local var val 00:03:59.121 17:41:02 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.121 17:41:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.121 17:41:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.121 17:41:02 
-- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.121 17:41:02 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.121 17:41:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.121 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.121 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.121 17:41:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126343980 kB' 'MemFree: 105441844 kB' 'MemAvailable: 108900300 kB' 'Buffers: 9968 kB' 'Cached: 14385508 kB' 'SwapCached: 0 kB' 'Active: 11314900 kB' 'Inactive: 3540376 kB' 'Active(anon): 10867220 kB' 'Inactive(anon): 0 kB' 'Active(file): 447680 kB' 'Inactive(file): 3540376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463072 kB' 'Mapped: 184532 kB' 'Shmem: 10407420 kB' 'KReclaimable: 511356 kB' 'Slab: 1233028 kB' 'SReclaimable: 511356 kB' 'SUnreclaim: 721672 kB' 'KernelStack: 25184 kB' 'PageTables: 8536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512016 kB' 'Committed_AS: 12394992 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230380 kB' 'VmallocChunk: 0 kB' 'Percpu: 113664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3025188 kB' 'DirectMap2M: 21821440 kB' 'DirectMap1G: 111149056 kB' 00:03:59.121 17:41:02 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.121 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.121 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.121 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.121 17:41:02 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.121 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.121 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.121 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.121 17:41:02 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.121 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.121 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.121 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.121 17:41:02 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.121 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.121 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.121 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.121 17:41:02 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.121 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.121 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.121 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.121 17:41:02 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.121 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.121 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.121 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.121 17:41:02 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.121 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.121 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.121 17:41:02 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.121 17:41:02 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.121 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.121 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.121 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.121 17:41:02 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.121 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.121 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.121 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.121 17:41:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.121 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.121 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.121 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.121 17:41:02 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.121 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.121 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.121 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.121 17:41:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.121 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.121 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.121 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.121 17:41:02 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.122 
17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
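The no_shrink_alloc preamble traced a few lines back ("get_test_nr_hugepages 2097152 0", followed by nr_hugepages=1024 and node_ids=('0')) amounts to a size-to-page-count conversion pinned to a single node. A hedged sketch of that arithmetic, using the 2048 kB Hugepagesize reported in this run (the upstream helper may derive nr_hugepages differently; names here are illustrative):

# Figures come from this run's trace, not from re-running the script.
size_kb=2097152                    # first argument to get_test_nr_hugepages
hugepagesize_kb=2048               # Hugepagesize from /proc/meminfo in this run
nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 2097152 / 2048 = 1024, matching the trace
node_ids=(0)                       # second argument: pin the pool to node 0
declare -a nodes_test
for n in "${node_ids[@]}"; do
    nodes_test[n]=$nr_hugepages    # mirrors the nodes_test[...]=1024 assignment in the trace
done
echo "requesting ${nr_hugepages} hugepages on node ${node_ids[0]}"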
00:03:59.122 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.122 17:41:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.122 17:41:02 -- setup/common.sh@33 -- # echo 0 00:03:59.122 17:41:02 -- setup/common.sh@33 -- # return 0 00:03:59.122 17:41:02 -- setup/hugepages.sh@97 -- # anon=0 00:03:59.122 17:41:02 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:59.122 17:41:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.122 17:41:02 -- setup/common.sh@18 -- # local node= 00:03:59.122 17:41:02 -- setup/common.sh@19 -- # local var val 00:03:59.122 17:41:02 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.122 17:41:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.122 17:41:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.122 17:41:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.122 17:41:02 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.122 17:41:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.122 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126343980 kB' 'MemFree: 105446080 kB' 'MemAvailable: 108904536 kB' 'Buffers: 9968 kB' 'Cached: 14385512 kB' 'SwapCached: 0 kB' 'Active: 11314476 kB' 'Inactive: 3540376 kB' 'Active(anon): 10866796 kB' 'Inactive(anon): 0 kB' 'Active(file): 447680 kB' 'Inactive(file): 3540376 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462640 kB' 'Mapped: 184532 kB' 'Shmem: 10407424 kB' 'KReclaimable: 511356 kB' 'Slab: 1232980 kB' 'SReclaimable: 511356 kB' 'SUnreclaim: 721624 kB' 'KernelStack: 24928 kB' 'PageTables: 7888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512016 kB' 'Committed_AS: 12395176 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230332 kB' 'VmallocChunk: 0 kB' 'Percpu: 113664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3025188 kB' 'DirectMap2M: 21821440 kB' 'DirectMap1G: 111149056 kB' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 
17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.123 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.123 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.124 
17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.124 17:41:02 -- setup/common.sh@33 -- # echo 0 00:03:59.124 17:41:02 -- setup/common.sh@33 -- # return 0 00:03:59.124 17:41:02 -- setup/hugepages.sh@99 -- # surp=0 00:03:59.124 17:41:02 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:59.124 17:41:02 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:59.124 17:41:02 -- setup/common.sh@18 -- # local node= 00:03:59.124 17:41:02 -- setup/common.sh@19 -- # local var val 00:03:59.124 17:41:02 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.124 17:41:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.124 17:41:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.124 17:41:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.124 17:41:02 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.124 17:41:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.124 17:41:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126343980 kB' 'MemFree: 105445576 kB' 'MemAvailable: 108904032 kB' 'Buffers: 9968 kB' 'Cached: 14385524 kB' 'SwapCached: 0 kB' 'Active: 11314220 kB' 'Inactive: 3540376 kB' 'Active(anon): 10866540 kB' 'Inactive(anon): 0 kB' 'Active(file): 447680 kB' 'Inactive(file): 3540376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462352 kB' 'Mapped: 184528 kB' 'Shmem: 10407436 kB' 'KReclaimable: 511356 kB' 'Slab: 1233068 kB' 'SReclaimable: 511356 kB' 'SUnreclaim: 721712 kB' 'KernelStack: 24976 kB' 'PageTables: 8140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512016 kB' 'Committed_AS: 12396660 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230364 kB' 'VmallocChunk: 0 kB' 'Percpu: 113664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 3025188 kB' 'DirectMap2M: 21821440 kB' 'DirectMap1G: 111149056 kB' 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.124 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.124 17:41:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 
00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 
17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.125 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.125 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.126 17:41:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.126 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.126 17:41:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.126 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.126 17:41:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.126 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.126 17:41:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.126 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.126 17:41:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.126 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.126 17:41:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.126 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.126 17:41:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.126 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.126 17:41:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.126 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.126 17:41:02 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.126 17:41:02 -- setup/common.sh@33 -- # echo 0 00:03:59.126 17:41:02 -- setup/common.sh@33 -- # return 0 00:03:59.126 17:41:02 -- setup/hugepages.sh@100 -- # resv=0 00:03:59.126 17:41:02 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:59.126 nr_hugepages=1024 00:03:59.126 17:41:02 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:59.126 resv_hugepages=0 00:03:59.126 17:41:02 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:59.126 surplus_hugepages=0 00:03:59.126 17:41:02 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:59.126 anon_hugepages=0 00:03:59.126 17:41:02 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.126 17:41:02 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:59.126 17:41:02 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:59.126 17:41:02 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:59.126 17:41:02 -- setup/common.sh@18 -- # local node= 00:03:59.126 17:41:02 -- setup/common.sh@19 -- # local var val 00:03:59.126 17:41:02 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.126 17:41:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.126 17:41:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.126 17:41:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.126 17:41:02 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.126 17:41:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.126 17:41:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126343980 kB' 'MemFree: 105445824 kB' 'MemAvailable: 108904280 kB' 'Buffers: 9968 kB' 'Cached: 14385548 kB' 'SwapCached: 0 kB' 'Active: 11314648 kB' 'Inactive: 3540376 kB' 'Active(anon): 10866968 kB' 'Inactive(anon): 0 kB' 'Active(file): 447680 kB' 'Inactive(file): 3540376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462536 kB' 'Mapped: 184528 kB' 'Shmem: 10407460 kB' 'KReclaimable: 511356 kB' 'Slab: 1233068 kB' 'SReclaimable: 511356 kB' 'SUnreclaim: 721712 kB' 'KernelStack: 25248 kB' 'PageTables: 8812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512016 kB' 'Committed_AS: 12396676 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230444 kB' 'VmallocChunk: 0 kB' 'Percpu: 113664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3025188 kB' 'DirectMap2M: 21821440 kB' 'DirectMap1G: 111149056 kB' 00:03:59.126 17:41:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.126 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.126 17:41:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.126 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.126 17:41:02 -- setup/common.sh@31 
-- # read -r var val _ 00:03:59.126 17:41:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.126 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.126 17:41:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.126 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.126 17:41:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.126 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.126 17:41:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.126 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.126 17:41:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.126 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.126 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- 
setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.127 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.127 17:41:02 -- setup/common.sh@32 -- # continue 
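This is the third scan in a row: after HugePages_Surp (surp) and HugePages_Rsvd (resv) both came back as 0, hugepages.sh re-reads HugePages_Total and checks it against nr_hugepages + surp + resv (the (( 1024 == nr_hugepages + surp + resv )) entries at setup/hugepages.sh@107-110). A condensed sketch of that consistency check, an assumed simplification rather than the script itself:

    # The kernel's HugePages_Total must account for the requested pages
    # plus any surplus and reserved pages.
    nr_hugepages=1024
    surp=$(awk '$1 == "HugePages_Surp:"   {print $2}' /proc/meminfo)   # 0 in this run
    resv=$(awk '$1 == "HugePages_Rsvd:"   {print $2}' /proc/meminfo)   # 0 in this run
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)   # 1024 in this run
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: $total pages"
    else
        echo "unexpected hugepage count: $total vs $((nr_hugepages + surp + resv))" >&2
    fi

On this node the check passes, which is why the trace continues into get_nodes and the per-node pass below.
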
00:03:59.128 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.128 17:41:02 -- setup/common.sh@33 -- # echo 1024 00:03:59.128 17:41:02 -- setup/common.sh@33 -- # return 0 00:03:59.128 17:41:02 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:59.128 17:41:02 -- setup/hugepages.sh@112 -- # get_nodes 00:03:59.128 17:41:02 -- setup/hugepages.sh@27 -- # local node 00:03:59.128 17:41:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.128 17:41:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:59.128 17:41:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.128 17:41:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:59.128 17:41:02 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:59.128 17:41:02 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.128 17:41:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.128 17:41:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.128 17:41:02 -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:59.128 17:41:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.128 17:41:02 -- setup/common.sh@18 -- # local node=0 00:03:59.128 17:41:02 -- setup/common.sh@19 -- # local var val 00:03:59.128 17:41:02 -- setup/common.sh@20 -- # local mem_f mem 00:03:59.128 17:41:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.128 17:41:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:59.128 17:41:02 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:59.128 17:41:02 -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.128 17:41:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.128 17:41:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65662000 kB' 'MemFree: 59040976 kB' 'MemUsed: 6621024 kB' 'SwapCached: 0 kB' 'Active: 2459708 kB' 'Inactive: 288924 kB' 'Active(anon): 2087240 kB' 'Inactive(anon): 0 kB' 'Active(file): 372468 kB' 'Inactive(file): 288924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2620236 kB' 'Mapped: 139796 kB' 'AnonPages: 131568 kB' 'Shmem: 1958844 kB' 'KernelStack: 13336 kB' 'PageTables: 4628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 378960 kB' 'Slab: 740556 kB' 'SReclaimable: 378960 kB' 'SUnreclaim: 361596 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # continue 
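From setup/hugepages.sh@112 onward the same accounting is repeated per NUMA node: get_nodes enumerates /sys/devices/system/node/node+([0-9]) (two nodes here, no_nodes=2), and get_meminfo is called with a node argument so the snapshot comes from that node's own meminfo file, with the leading "Node 0 " prefix stripped via the extglob expansion ${mem[@]#Node +([0-9]) }. A small per-node sketch under the same assumptions (hypothetical helper, not the real setup/common.sh):

    shopt -s extglob
    # node_meminfo_sketch KEY NODE - read /sys/devices/system/node/nodeN/meminfo,
    # drop the "Node N " prefix, then scan keys exactly as in the global case.
    node_meminfo_sketch() {
        local get=$1 node=$2 line var val _
        local -a mem
        mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    node_meminfo_sketch HugePages_Surp 0   # prints 0, matching the node0 snapshot above
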
00:03:59.128 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.128 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.128 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.129 
17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # continue 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:59.129 17:41:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:59.129 17:41:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.129 17:41:02 -- setup/common.sh@33 -- # echo 0 00:03:59.129 17:41:02 -- setup/common.sh@33 -- # return 0 00:03:59.129 17:41:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.129 17:41:02 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.129 17:41:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.129 17:41:02 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.129 17:41:02 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:59.129 node0=1024 expecting 1024 00:03:59.129 17:41:02 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:59.129 17:41:02 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:59.129 17:41:02 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:59.129 17:41:02 -- setup/hugepages.sh@202 -- # setup output 00:03:59.129 17:41:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.129 17:41:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:03.342 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:03.342 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:03.342 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:03.342 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:03.342 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:03.342 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:03.342 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:03.342 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:03.342 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:03.342 0000:65:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:03.342 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:03.342 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:03.342 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:03.342 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:03.342 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:03.342 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:03.342 0000:00:01.1 (8086 0b00): Already using the vfio-pci 
driver 00:04:03.342 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:03.342 17:41:06 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:03.342 17:41:06 -- setup/hugepages.sh@89 -- # local node 00:04:03.342 17:41:06 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:03.342 17:41:06 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:03.342 17:41:06 -- setup/hugepages.sh@92 -- # local surp 00:04:03.342 17:41:06 -- setup/hugepages.sh@93 -- # local resv 00:04:03.342 17:41:06 -- setup/hugepages.sh@94 -- # local anon 00:04:03.342 17:41:06 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.342 17:41:06 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:03.342 17:41:06 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.342 17:41:06 -- setup/common.sh@18 -- # local node= 00:04:03.342 17:41:06 -- setup/common.sh@19 -- # local var val 00:04:03.342 17:41:06 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.342 17:41:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.342 17:41:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.342 17:41:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.342 17:41:06 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.342 17:41:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.342 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.342 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.342 17:41:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126343980 kB' 'MemFree: 105474584 kB' 'MemAvailable: 108933040 kB' 'Buffers: 9968 kB' 'Cached: 14385640 kB' 'SwapCached: 0 kB' 'Active: 11314936 kB' 'Inactive: 3540376 kB' 'Active(anon): 10867256 kB' 'Inactive(anon): 0 kB' 'Active(file): 447680 kB' 'Inactive(file): 3540376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462948 kB' 'Mapped: 184532 kB' 'Shmem: 10407552 kB' 'KReclaimable: 511356 kB' 'Slab: 1233148 kB' 'SReclaimable: 511356 kB' 'SUnreclaim: 721792 kB' 'KernelStack: 25072 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512016 kB' 'Committed_AS: 12392612 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230268 kB' 'VmallocChunk: 0 kB' 'Percpu: 113664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3025188 kB' 'DirectMap2M: 21821440 kB' 'DirectMap1G: 111149056 kB' 00:04:03.342 17:41:06 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.342 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.342 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.342 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.342 17:41:06 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.342 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.342 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.342 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.342 17:41:06 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.342 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.342 
17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.342 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.342 17:41:06 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.342 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.342 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.342 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.342 17:41:06 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.342 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.342 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.342 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.342 17:41:06 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.342 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.342 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.342 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.342 17:41:06 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.342 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.342 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.342 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.342 17:41:06 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.342 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.342 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.342 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.342 17:41:06 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.342 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.342 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.342 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.342 17:41:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.342 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.342 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.342 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.342 17:41:06 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.342 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.342 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.342 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.342 17:41:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.342 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.342 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.342 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.342 17:41:06 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.342 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.342 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.342 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.342 17:41:06 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.342 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.342 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.342 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.342 17:41:06 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.342 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.342 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.342 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.342 17:41:06 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
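Two things happen in the tail of this pass: setup.sh is re-invoked with NRHUGE=512 and CLEAR_HUGE=no, and because 1024 pages are already allocated it leaves the pool alone ("INFO: Requested 512 hugepages but 1024 already allocated on node0"); then verify_nr_hugepages starts over and only consults AnonHugePages because transparent_hugepage is not set to never (the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test). A rough sketch of those two guards, assumed from the log rather than copied from the scripts:

    # Guard 1 (assumed simplification of the setup.sh behaviour seen above):
    # with CLEAR_HUGE=no, an existing per-node hugepage pool is never shrunk.
    requested=${NRHUGE:-512}
    allocated=$(cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages)
    if (( allocated >= requested )); then
        echo "INFO: Requested $requested hugepages but $allocated already allocated on node0"
    fi

    # Guard 2 (mirrors the transparent_hugepage test in the trace): anonymous
    # THP usage is only worth checking when THP is not globally disabled.
    if [[ $(cat /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
        awk '$1 == "AnonHugePages:" {print "anon_hugepages=" $2 " kB"}' /proc/meminfo
    fi
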
00:04:03.343 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.343 17:41:06 -- setup/common.sh@32 -- 
# [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # continue 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.343 17:41:06 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.343 17:41:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.343 17:41:06 -- setup/common.sh@33 -- # echo 0 00:04:03.343 17:41:06 -- setup/common.sh@33 -- # return 0 00:04:03.343 17:41:06 -- 
setup/hugepages.sh@97 -- # anon=0 00:04:03.343 17:41:07 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:03.343 17:41:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.343 17:41:07 -- setup/common.sh@18 -- # local node= 00:04:03.343 17:41:07 -- setup/common.sh@19 -- # local var val 00:04:03.343 17:41:07 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.343 17:41:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.343 17:41:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.343 17:41:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.343 17:41:07 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.343 17:41:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.343 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.343 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.343 17:41:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126343980 kB' 'MemFree: 105476472 kB' 'MemAvailable: 108934928 kB' 'Buffers: 9968 kB' 'Cached: 14385644 kB' 'SwapCached: 0 kB' 'Active: 11315048 kB' 'Inactive: 3540376 kB' 'Active(anon): 10867368 kB' 'Inactive(anon): 0 kB' 'Active(file): 447680 kB' 'Inactive(file): 3540376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463092 kB' 'Mapped: 184532 kB' 'Shmem: 10407556 kB' 'KReclaimable: 511356 kB' 'Slab: 1233124 kB' 'SReclaimable: 511356 kB' 'SUnreclaim: 721768 kB' 'KernelStack: 25072 kB' 'PageTables: 8452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512016 kB' 'Committed_AS: 12392624 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230284 kB' 'VmallocChunk: 0 kB' 'Percpu: 113664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3025188 kB' 'DirectMap2M: 21821440 kB' 'DirectMap1G: 111149056 kB' 00:04:03.343 17:41:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.343 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.343 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.343 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.343 17:41:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.343 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.343 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.343 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.343 17:41:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.343 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.343 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 
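One detail that makes these records hard to read: under set -x, bash prints a quoted operand on the right-hand side of == inside [[ ]] with each character backslash-escaped, to show it is matched literally rather than as a glob. That is why the requested key appears as \H\u\g\e\P\a\g\e\s\_\S\u\r\p instead of HugePages_Surp. A short reproduction of the effect (the variable name get mirrors the traced script; the literal MemFree is just an example key):

  set -x
  get=HugePages_Surp
  [[ MemFree == "$get" ]]   # traced as: [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]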
00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 
-- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.344 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.344 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.345 17:41:07 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.345 17:41:07 -- setup/common.sh@33 -- # echo 0 00:04:03.345 17:41:07 -- setup/common.sh@33 -- # return 0 00:04:03.345 17:41:07 -- setup/hugepages.sh@99 -- # surp=0 00:04:03.345 17:41:07 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.345 17:41:07 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.345 17:41:07 -- setup/common.sh@18 -- # local node= 00:04:03.345 17:41:07 -- setup/common.sh@19 -- # local var val 00:04:03.345 17:41:07 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.345 17:41:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.345 17:41:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.345 17:41:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.345 17:41:07 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.345 17:41:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.345 17:41:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
126343980 kB' 'MemFree: 105476012 kB' 'MemAvailable: 108934468 kB' 'Buffers: 9968 kB' 'Cached: 14385656 kB' 'SwapCached: 0 kB' 'Active: 11315100 kB' 'Inactive: 3540376 kB' 'Active(anon): 10867420 kB' 'Inactive(anon): 0 kB' 'Active(file): 447680 kB' 'Inactive(file): 3540376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463092 kB' 'Mapped: 184532 kB' 'Shmem: 10407568 kB' 'KReclaimable: 511356 kB' 'Slab: 1233124 kB' 'SReclaimable: 511356 kB' 'SUnreclaim: 721768 kB' 'KernelStack: 25072 kB' 'PageTables: 8452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512016 kB' 'Committed_AS: 12392640 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230300 kB' 'VmallocChunk: 0 kB' 'Percpu: 113664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3025188 kB' 'DirectMap2M: 21821440 kB' 'DirectMap1G: 111149056 kB' 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.345 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.345 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.346 17:41:07 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # 
continue 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.346 17:41:07 
-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.346 17:41:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.346 17:41:07 -- setup/common.sh@33 -- # echo 0 00:04:03.346 17:41:07 -- setup/common.sh@33 -- # return 0 00:04:03.346 17:41:07 -- setup/hugepages.sh@100 -- # resv=0 00:04:03.346 17:41:07 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:03.346 nr_hugepages=1024 00:04:03.346 17:41:07 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:03.346 resv_hugepages=0 00:04:03.346 17:41:07 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:03.346 surplus_hugepages=0 00:04:03.346 17:41:07 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:03.346 anon_hugepages=0 00:04:03.346 17:41:07 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.346 17:41:07 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:03.346 17:41:07 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:03.346 17:41:07 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.346 17:41:07 -- setup/common.sh@18 -- # local node= 00:04:03.346 17:41:07 -- setup/common.sh@19 -- # local var val 00:04:03.346 17:41:07 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.346 17:41:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.346 17:41:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.346 17:41:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.346 17:41:07 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.346 17:41:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.346 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.347 17:41:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126343980 kB' 'MemFree: 105476612 kB' 'MemAvailable: 108935068 kB' 'Buffers: 9968 kB' 'Cached: 14385680 kB' 'SwapCached: 0 kB' 'Active: 11314720 kB' 'Inactive: 3540376 kB' 'Active(anon): 10867040 kB' 'Inactive(anon): 0 kB' 'Active(file): 447680 kB' 'Inactive(file): 3540376 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462660 kB' 'Mapped: 184544 kB' 'Shmem: 10407592 kB' 'KReclaimable: 511356 kB' 'Slab: 1233124 kB' 'SReclaimable: 511356 kB' 'SUnreclaim: 721768 kB' 'KernelStack: 24992 kB' 'PageTables: 8208 kB' 'SecPageTables: 0 
kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70512016 kB' 'Committed_AS: 12392656 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 230300 kB' 'VmallocChunk: 0 kB' 'Percpu: 113664 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3025188 kB' 'DirectMap2M: 21821440 kB' 'DirectMap1G: 111149056 kB' 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.347 
17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.347 
17:41:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.347 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.347 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # continue 
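The values echoed a little earlier in this pass (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) feed the checks at setup/hugepages.sh@107 and @110 that this scan is working toward: HugePages_Total from /proc/meminfo must equal nr_hugepages + surplus + reserved, after which the trace walks /sys/devices/system/node/node* and records the per-node split (1024 pages on node0, 0 on node1, consistent with the INFO line at the top of this pass). A compact sketch of the same arithmetic, with illustrative variable names and awk standing in for the script's own get_meminfo:

  # Sketch of the verification, assuming the values seen in this run.
  nr_hugepages=1024 surp=0 resv=0
  total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
  (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
  # Per-node split, read from the same files the trace probes:
  for node in /sys/devices/system/node/node[0-9]*; do
      awk -v n="${node##*node}" '$3 == "HugePages_Total:" {print "node" n ": " $4}' "$node/meminfo"
  done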
00:04:03.348 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 
17:41:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.348 17:41:07 -- setup/common.sh@33 -- # echo 1024 00:04:03.348 17:41:07 -- setup/common.sh@33 -- # return 0 00:04:03.348 17:41:07 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.348 17:41:07 -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.348 17:41:07 -- setup/hugepages.sh@27 -- # local node 00:04:03.348 17:41:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.348 17:41:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:03.348 17:41:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.348 17:41:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:03.348 17:41:07 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:03.348 17:41:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.348 17:41:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.348 17:41:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.348 17:41:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.348 17:41:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.348 17:41:07 -- setup/common.sh@18 -- # local node=0 00:04:03.348 17:41:07 -- setup/common.sh@19 -- # local var val 00:04:03.348 17:41:07 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.348 17:41:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.348 17:41:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.348 17:41:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.348 17:41:07 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.348 17:41:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.348 17:41:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65662000 kB' 'MemFree: 59059812 kB' 'MemUsed: 6602188 kB' 'SwapCached: 0 kB' 'Active: 2460232 kB' 'Inactive: 288924 kB' 'Active(anon): 2087764 kB' 'Inactive(anon): 0 kB' 'Active(file): 372468 kB' 'Inactive(file): 288924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2620292 kB' 'Mapped: 139812 kB' 'AnonPages: 131980 kB' 'Shmem: 1958900 kB' 'KernelStack: 13224 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 378960 kB' 'Slab: 740632 kB' 'SReclaimable: 378960 kB' 'SUnreclaim: 361672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # 
continue 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.348 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.348 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 17:41:07 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # continue 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.349 17:41:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.349 17:41:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.349 17:41:07 -- setup/common.sh@33 -- # echo 0 00:04:03.349 17:41:07 -- setup/common.sh@33 -- # return 0 00:04:03.349 17:41:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.349 17:41:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.349 17:41:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.349 17:41:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.349 17:41:07 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:03.349 node0=1024 expecting 1024 00:04:03.349 17:41:07 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:03.349 00:04:03.349 real 0m8.423s 00:04:03.349 user 0m3.304s 00:04:03.349 sys 0m5.265s 00:04:03.349 17:41:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.349 17:41:07 -- common/autotest_common.sh@10 -- # set +x 00:04:03.349 ************************************ 00:04:03.349 END TEST no_shrink_alloc 00:04:03.349 ************************************ 00:04:03.349 17:41:07 -- setup/hugepages.sh@217 -- # clear_hp 00:04:03.349 17:41:07 -- setup/hugepages.sh@37 -- # local 
node hp 00:04:03.349 17:41:07 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:03.349 17:41:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:03.349 17:41:07 -- setup/hugepages.sh@41 -- # echo 0 00:04:03.349 17:41:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:03.349 17:41:07 -- setup/hugepages.sh@41 -- # echo 0 00:04:03.349 17:41:07 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:03.349 17:41:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:03.349 17:41:07 -- setup/hugepages.sh@41 -- # echo 0 00:04:03.349 17:41:07 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:03.349 17:41:07 -- setup/hugepages.sh@41 -- # echo 0 00:04:03.349 17:41:07 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:03.349 17:41:07 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:03.349 00:04:03.349 real 0m31.531s 00:04:03.349 user 0m11.538s 00:04:03.349 sys 0m18.643s 00:04:03.349 17:41:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.349 17:41:07 -- common/autotest_common.sh@10 -- # set +x 00:04:03.349 ************************************ 00:04:03.349 END TEST hugepages 00:04:03.349 ************************************ 00:04:03.349 17:41:07 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:03.349 17:41:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:03.349 17:41:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:03.349 17:41:07 -- common/autotest_common.sh@10 -- # set +x 00:04:03.349 ************************************ 00:04:03.349 START TEST driver 00:04:03.349 ************************************ 00:04:03.349 17:41:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:03.349 * Looking for test storage... 
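The hugepages suite that just finished spends most of its trace inside get_meminfo, splitting /sys/devices/system/node/nodeN/meminfo on ': ' and skipping every field until it reaches HugePages_Total or HugePages_Surp, and it ends by zeroing every per-node pool (hence the CLEAR_HUGE=yes export above). A standalone sketch of those two steps, not the harness's own get_meminfo/clear_hp helpers, assuming only the standard sysfs layout:

#!/usr/bin/env bash
# Report the per-node HugePages_* counters the hugepages suite parses above,
# then clear every per-node pool the way its clear_hp step does.
set -euo pipefail

for node in /sys/devices/system/node/node[0-9]*; do
    n=${node##*node}
    # Per-node lines look like: "Node 0 HugePages_Total:  1024"
    awk -v n="$n" '/HugePages_(Total|Free|Surp)/ {print "node" n, $3, $4}' "$node/meminfo"
done

if [[ ${CLEAR_HUGE:-no} == yes ]]; then
    for hp in /sys/devices/system/node/node[0-9]*/hugepages/hugepages-*; do
        echo 0 | sudo tee "$hp/nr_hugepages" >/dev/null
    done
fi

Writing 0 to each nr_hugepages file is what lets the next test suite start from an empty pool.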
00:04:03.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:03.349 17:41:07 -- setup/driver.sh@68 -- # setup reset 00:04:03.349 17:41:07 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:03.349 17:41:07 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:08.636 17:41:12 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:08.636 17:41:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:08.636 17:41:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:08.636 17:41:12 -- common/autotest_common.sh@10 -- # set +x 00:04:08.636 ************************************ 00:04:08.636 START TEST guess_driver 00:04:08.636 ************************************ 00:04:08.636 17:41:12 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:08.636 17:41:12 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:08.636 17:41:12 -- setup/driver.sh@47 -- # local fail=0 00:04:08.636 17:41:12 -- setup/driver.sh@49 -- # pick_driver 00:04:08.636 17:41:12 -- setup/driver.sh@36 -- # vfio 00:04:08.636 17:41:12 -- setup/driver.sh@21 -- # local iommu_grups 00:04:08.636 17:41:12 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:08.636 17:41:12 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:08.636 17:41:12 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:08.636 17:41:12 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:08.636 17:41:12 -- setup/driver.sh@29 -- # (( 370 > 0 )) 00:04:08.636 17:41:12 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:08.636 17:41:12 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:08.636 17:41:12 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:08.636 17:41:12 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:08.636 17:41:12 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:08.636 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:08.636 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:08.636 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:08.636 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:08.636 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:08.636 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:08.636 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:08.636 17:41:12 -- setup/driver.sh@30 -- # return 0 00:04:08.636 17:41:12 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:08.636 17:41:12 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:08.636 17:41:12 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:08.636 17:41:12 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:08.636 Looking for driver=vfio-pci 00:04:08.636 17:41:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.636 17:41:12 -- setup/driver.sh@45 -- # setup output config 00:04:08.636 17:41:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.636 17:41:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:12.841 17:41:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.841 17:41:16 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:04:12.841 17:41:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.841 17:41:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.841 17:41:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.841 17:41:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.841 17:41:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.841 17:41:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.841 17:41:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.841 17:41:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.841 17:41:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.841 17:41:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.841 17:41:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.841 17:41:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.841 17:41:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.841 17:41:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.841 17:41:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.841 17:41:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.841 17:41:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.841 17:41:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.841 17:41:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.841 17:41:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.841 17:41:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.841 17:41:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.841 17:41:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.841 17:41:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.841 17:41:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.841 17:41:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.841 17:41:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.841 17:41:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.841 17:41:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.841 17:41:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.841 17:41:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.841 17:41:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.841 17:41:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.841 17:41:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.841 17:41:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.841 17:41:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.841 17:41:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.841 17:41:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.841 17:41:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.841 17:41:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.841 17:41:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.841 17:41:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.841 17:41:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.841 17:41:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.841 17:41:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.841 17:41:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.756 17:41:18 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:04:14.756 17:41:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.756 17:41:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.756 17:41:18 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:14.756 17:41:18 -- setup/driver.sh@65 -- # setup reset 00:04:14.756 17:41:18 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:14.756 17:41:18 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:20.043 00:04:20.043 real 0m11.358s 00:04:20.043 user 0m3.164s 00:04:20.043 sys 0m5.560s 00:04:20.043 17:41:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.043 17:41:23 -- common/autotest_common.sh@10 -- # set +x 00:04:20.043 ************************************ 00:04:20.043 END TEST guess_driver 00:04:20.043 ************************************ 00:04:20.043 00:04:20.043 real 0m16.789s 00:04:20.043 user 0m4.738s 00:04:20.043 sys 0m8.604s 00:04:20.043 17:41:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.043 17:41:24 -- common/autotest_common.sh@10 -- # set +x 00:04:20.043 ************************************ 00:04:20.043 END TEST driver 00:04:20.044 ************************************ 00:04:20.044 17:41:24 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:20.044 17:41:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:20.044 17:41:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:20.044 17:41:24 -- common/autotest_common.sh@10 -- # set +x 00:04:20.044 ************************************ 00:04:20.044 START TEST devices 00:04:20.044 ************************************ 00:04:20.044 17:41:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:20.044 * Looking for test storage... 
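The guess_driver test above settled on vfio-pci because /sys/kernel/iommu_groups was populated (370 groups) and modprobe --show-depends vfio_pci resolved to loadable .ko modules. A condensed standalone sketch of that decision; the uio_pci_generic fallback is an assumption here, this run never takes that branch:

#!/usr/bin/env bash
# Pick a userspace PCI driver the way the guess_driver test does:
# vfio-pci when the IOMMU is usable and vfio_pci resolves to real modules.
set -euo pipefail
shopt -s nullglob

pick_driver() {
    local groups=(/sys/kernel/iommu_groups/*)
    local unsafe=N
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)

    # vfio-pci is usable if IOMMU groups exist (or no-IOMMU mode is enabled)
    # and the module graph for vfio_pci resolves to real .ko files.
    if { (( ${#groups[@]} > 0 )) || [[ $unsafe == [Yy] ]]; } &&
        modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
        echo vfio-pci
    else
        echo uio_pci_generic    # assumed fallback, not exercised in this log
    fi
}

pick_driver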
00:04:20.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:20.044 17:41:24 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:20.044 17:41:24 -- setup/devices.sh@192 -- # setup reset 00:04:20.044 17:41:24 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:20.044 17:41:24 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:25.334 17:41:28 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:25.334 17:41:28 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:25.334 17:41:28 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:25.334 17:41:28 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:25.334 17:41:28 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:25.334 17:41:28 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:25.334 17:41:28 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:25.334 17:41:28 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:25.334 17:41:28 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:25.334 17:41:28 -- setup/devices.sh@196 -- # blocks=() 00:04:25.334 17:41:28 -- setup/devices.sh@196 -- # declare -a blocks 00:04:25.334 17:41:28 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:25.334 17:41:28 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:25.334 17:41:28 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:25.334 17:41:28 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:25.334 17:41:28 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:25.334 17:41:28 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:25.334 17:41:28 -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:25.334 17:41:28 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:25.334 17:41:28 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:25.334 17:41:28 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:25.334 17:41:28 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:25.334 No valid GPT data, bailing 00:04:25.334 17:41:28 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:25.334 17:41:28 -- scripts/common.sh@393 -- # pt= 00:04:25.334 17:41:28 -- scripts/common.sh@394 -- # return 1 00:04:25.334 17:41:28 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:25.334 17:41:28 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:25.334 17:41:28 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:25.334 17:41:28 -- setup/common.sh@80 -- # echo 2000398934016 00:04:25.334 17:41:28 -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:04:25.334 17:41:28 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:25.334 17:41:28 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:25.334 17:41:28 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:25.334 17:41:28 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:25.334 17:41:28 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:25.334 17:41:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:25.334 17:41:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:25.334 17:41:28 -- common/autotest_common.sh@10 -- # set +x 00:04:25.334 ************************************ 00:04:25.334 START TEST nvme_mount 00:04:25.334 ************************************ 00:04:25.334 17:41:28 -- 
common/autotest_common.sh@1104 -- # nvme_mount 00:04:25.334 17:41:28 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:25.334 17:41:28 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:25.334 17:41:28 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.334 17:41:28 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:25.334 17:41:28 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:25.334 17:41:28 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:25.334 17:41:28 -- setup/common.sh@40 -- # local part_no=1 00:04:25.334 17:41:28 -- setup/common.sh@41 -- # local size=1073741824 00:04:25.334 17:41:28 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:25.334 17:41:28 -- setup/common.sh@44 -- # parts=() 00:04:25.334 17:41:28 -- setup/common.sh@44 -- # local parts 00:04:25.334 17:41:28 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:25.334 17:41:28 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:25.334 17:41:28 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:25.334 17:41:28 -- setup/common.sh@46 -- # (( part++ )) 00:04:25.334 17:41:28 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:25.334 17:41:28 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:25.334 17:41:28 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:25.334 17:41:28 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:25.669 Creating new GPT entries in memory. 00:04:25.669 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:25.669 other utilities. 00:04:25.669 17:41:29 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:25.669 17:41:29 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:25.669 17:41:29 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:25.669 17:41:29 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:25.669 17:41:29 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:26.629 Creating new GPT entries in memory. 00:04:26.629 The operation has completed successfully. 
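The nvme_mount test running here wipes the GPT on the test disk, carves a single 1 GiB partition while holding flock on the disk, then makes an ext4 filesystem on it and mounts it under test/setup/nvme_mount (the mkfs and mount appear just below). A hand-rolled equivalent with placeholder paths; the harness additionally waits for the partition uevent via scripts/sync_dev_uevents.sh, replaced here by a plain udevadm settle:

#!/usr/bin/env bash
# One 1 GiB GPT partition, ext4 on top, mounted at a scratch directory.
# DISK and MNT are placeholders -- point them at a disk you are allowed to wipe.
set -euo pipefail

DISK=/dev/nvme0n1
MNT=/tmp/nvme_mount

sudo sgdisk "$DISK" --zap-all
# Same sector range the harness uses: 2048..2099199 = 2097152 x 512 B = 1 GiB.
sudo flock "$DISK" sgdisk "$DISK" --new=1:2048:2099199
sudo udevadm settle            # stand-in for scripts/sync_dev_uevents.sh

sudo mkdir -p "$MNT"
sudo mkfs.ext4 -qF "${DISK}p1"
sudo mount "${DISK}p1" "$MNT"

Teardown later in the trace is the reverse: umount, then wipefs --all on the partition and on the whole disk.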
00:04:26.629 17:41:30 -- setup/common.sh@57 -- # (( part++ )) 00:04:26.629 17:41:30 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.629 17:41:30 -- setup/common.sh@62 -- # wait 1459809 00:04:26.629 17:41:30 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.629 17:41:30 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:26.629 17:41:30 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.629 17:41:30 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:26.629 17:41:30 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:26.629 17:41:30 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.629 17:41:30 -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:26.629 17:41:30 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:26.629 17:41:30 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:26.629 17:41:30 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.629 17:41:30 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:26.629 17:41:30 -- setup/devices.sh@53 -- # local found=0 00:04:26.630 17:41:30 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:26.630 17:41:30 -- setup/devices.sh@56 -- # : 00:04:26.630 17:41:30 -- setup/devices.sh@59 -- # local pci status 00:04:26.630 17:41:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.630 17:41:30 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:26.630 17:41:30 -- setup/devices.sh@47 -- # setup output config 00:04:26.630 17:41:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.630 17:41:30 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:30.863 17:41:34 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.863 17:41:34 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:30.863 17:41:34 -- setup/devices.sh@63 -- # found=1 00:04:30.863 17:41:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.863 17:41:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.863 17:41:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.863 17:41:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.863 17:41:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.863 17:41:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.863 17:41:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.863 17:41:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.863 17:41:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.863 17:41:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.863 
17:41:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.863 17:41:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.863 17:41:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.863 17:41:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.863 17:41:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.863 17:41:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.863 17:41:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.863 17:41:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.863 17:41:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.863 17:41:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.863 17:41:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.863 17:41:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.863 17:41:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.863 17:41:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.863 17:41:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.863 17:41:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.863 17:41:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.863 17:41:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.863 17:41:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.863 17:41:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.863 17:41:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.863 17:41:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:30.863 17:41:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.863 17:41:34 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:30.863 17:41:34 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:30.863 17:41:34 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.863 17:41:34 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:30.863 17:41:34 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:30.863 17:41:34 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:30.863 17:41:34 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.863 17:41:34 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.863 17:41:34 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:30.863 17:41:34 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:30.863 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:30.863 17:41:34 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:30.863 17:41:34 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:30.863 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:30.863 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:30.863 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:30.863 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:30.863 17:41:35 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:30.863 17:41:35 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:30.863 17:41:35 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.863 17:41:35 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:30.863 17:41:35 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:30.863 17:41:35 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.863 17:41:35 -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:30.863 17:41:35 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:30.863 17:41:35 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:30.863 17:41:35 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.863 17:41:35 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:30.863 17:41:35 -- setup/devices.sh@53 -- # local found=0 00:04:30.863 17:41:35 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:30.863 17:41:35 -- setup/devices.sh@56 -- # : 00:04:30.863 17:41:35 -- setup/devices.sh@59 -- # local pci status 00:04:30.863 17:41:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.863 17:41:35 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:30.863 17:41:35 -- setup/devices.sh@47 -- # setup output config 00:04:30.864 17:41:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.864 17:41:35 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:35.073 17:41:38 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.073 17:41:38 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:35.073 17:41:38 -- setup/devices.sh@63 -- # found=1 00:04:35.073 17:41:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.073 17:41:38 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.073 17:41:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.073 17:41:38 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.073 17:41:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.073 17:41:38 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.073 17:41:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.073 17:41:38 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.073 17:41:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.073 17:41:38 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.074 17:41:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.074 17:41:38 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.074 17:41:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.074 17:41:38 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.074 17:41:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.074 17:41:38 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.074 17:41:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.074 17:41:38 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.074 17:41:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.074 17:41:38 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.074 17:41:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.074 17:41:38 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.074 17:41:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.074 17:41:38 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.074 17:41:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.074 17:41:38 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.074 17:41:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.074 17:41:38 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.074 17:41:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.074 17:41:38 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.074 17:41:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.074 17:41:38 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:35.074 17:41:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.074 17:41:38 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:35.074 17:41:38 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:35.074 17:41:38 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.074 17:41:38 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:35.074 17:41:38 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:35.074 17:41:38 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.074 17:41:38 -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:35.074 17:41:38 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:35.074 17:41:38 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:35.074 17:41:38 -- setup/devices.sh@50 -- # local mount_point= 00:04:35.074 17:41:38 -- setup/devices.sh@51 -- # local test_file= 00:04:35.074 17:41:38 -- setup/devices.sh@53 -- # local found=0 00:04:35.074 17:41:38 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:35.074 17:41:38 -- setup/devices.sh@59 -- # local pci status 00:04:35.074 17:41:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.074 17:41:38 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:35.074 17:41:38 -- setup/devices.sh@47 -- # setup output config 00:04:35.074 17:41:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.074 17:41:38 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:39.282 17:41:42 -- 
setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.282 17:41:42 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:39.282 17:41:42 -- setup/devices.sh@63 -- # found=1 00:04:39.282 17:41:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.282 17:41:42 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.282 17:41:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.282 17:41:42 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.282 17:41:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.282 17:41:42 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.282 17:41:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.282 17:41:42 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.282 17:41:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.282 17:41:42 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.282 17:41:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.282 17:41:42 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.282 17:41:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.282 17:41:42 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.282 17:41:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.282 17:41:42 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.282 17:41:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.283 17:41:42 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.283 17:41:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.283 17:41:42 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.283 17:41:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.283 17:41:42 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.283 17:41:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.283 17:41:42 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.283 17:41:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.283 17:41:42 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.283 17:41:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.283 17:41:42 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.283 17:41:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.283 17:41:42 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.283 17:41:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.283 17:41:42 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:39.283 17:41:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.283 17:41:43 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:39.283 17:41:43 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:39.283 17:41:43 -- setup/devices.sh@68 -- # return 0 00:04:39.283 17:41:43 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:39.283 17:41:43 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:39.283 17:41:43 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:04:39.283 17:41:43 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:39.283 17:41:43 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:39.283 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:39.283 00:04:39.283 real 0m14.450s 00:04:39.283 user 0m4.452s 00:04:39.283 sys 0m7.904s 00:04:39.283 17:41:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.283 17:41:43 -- common/autotest_common.sh@10 -- # set +x 00:04:39.283 ************************************ 00:04:39.283 END TEST nvme_mount 00:04:39.283 ************************************ 00:04:39.283 17:41:43 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:39.283 17:41:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:39.283 17:41:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:39.283 17:41:43 -- common/autotest_common.sh@10 -- # set +x 00:04:39.283 ************************************ 00:04:39.283 START TEST dm_mount 00:04:39.283 ************************************ 00:04:39.283 17:41:43 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:39.283 17:41:43 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:39.283 17:41:43 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:39.283 17:41:43 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:39.283 17:41:43 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:39.283 17:41:43 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:39.283 17:41:43 -- setup/common.sh@40 -- # local part_no=2 00:04:39.283 17:41:43 -- setup/common.sh@41 -- # local size=1073741824 00:04:39.283 17:41:43 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:39.283 17:41:43 -- setup/common.sh@44 -- # parts=() 00:04:39.283 17:41:43 -- setup/common.sh@44 -- # local parts 00:04:39.283 17:41:43 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:39.283 17:41:43 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:39.283 17:41:43 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:39.283 17:41:43 -- setup/common.sh@46 -- # (( part++ )) 00:04:39.283 17:41:43 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:39.283 17:41:43 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:39.283 17:41:43 -- setup/common.sh@46 -- # (( part++ )) 00:04:39.283 17:41:43 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:39.283 17:41:43 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:39.283 17:41:43 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:39.283 17:41:43 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:40.225 Creating new GPT entries in memory. 00:04:40.225 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:40.225 other utilities. 00:04:40.225 17:41:44 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:40.225 17:41:44 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:40.225 17:41:44 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:40.225 17:41:44 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:40.225 17:41:44 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:41.167 Creating new GPT entries in memory. 00:04:41.167 The operation has completed successfully. 
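The dm_mount test repeats the partitioning with part_no=2 (the second sgdisk call follows just below) and then glues both partitions into one device-mapper node with dmsetup create nvme_dm_test before formatting and mounting it. A sketch of that device-mapper step under stated assumptions: the linear table shown is a plausible reconstruction, since the trace records the dmsetup call but not the table piped into it, and NAME/MNT are placeholders:

#!/usr/bin/env bash
# Join two equal 1 GiB partitions into a single linear device-mapper target.
set -euo pipefail

DISK=/dev/nvme0n1
NAME=nvme_dm_test
MNT=/tmp/dm_mount
SECTORS=2097152                      # 1 GiB per partition, in 512 B sectors

sudo dmsetup create "$NAME" <<EOF
0 $SECTORS linear ${DISK}p1 0
$SECTORS $SECTORS linear ${DISK}p2 0
EOF

readlink -f "/dev/mapper/$NAME"      # resolves to /dev/dm-0 later in this trace
sudo mkfs.ext4 -qF "/dev/mapper/$NAME"
sudo mkdir -p "$MNT"
sudo mount "/dev/mapper/$NAME" "$MNT"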
00:04:41.167 17:41:45 -- setup/common.sh@57 -- # (( part++ )) 00:04:41.167 17:41:45 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.167 17:41:45 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:41.167 17:41:45 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:41.167 17:41:45 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:42.108 The operation has completed successfully. 00:04:42.108 17:41:46 -- setup/common.sh@57 -- # (( part++ )) 00:04:42.108 17:41:46 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:42.108 17:41:46 -- setup/common.sh@62 -- # wait 1464947 00:04:42.108 17:41:46 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:42.108 17:41:46 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:42.108 17:41:46 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:42.108 17:41:46 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:42.108 17:41:46 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:42.108 17:41:46 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:42.108 17:41:46 -- setup/devices.sh@161 -- # break 00:04:42.108 17:41:46 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:42.108 17:41:46 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:42.108 17:41:46 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:42.108 17:41:46 -- setup/devices.sh@166 -- # dm=dm-0 00:04:42.108 17:41:46 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:42.108 17:41:46 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:42.108 17:41:46 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:42.108 17:41:46 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:42.108 17:41:46 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:42.108 17:41:46 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:42.108 17:41:46 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:42.108 17:41:46 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:42.108 17:41:46 -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:42.108 17:41:46 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:42.108 17:41:46 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:42.108 17:41:46 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:42.108 17:41:46 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:42.108 17:41:46 -- setup/devices.sh@53 -- # local found=0 00:04:42.108 17:41:46 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:42.108 17:41:46 -- setup/devices.sh@56 -- # : 00:04:42.108 17:41:46 -- 
setup/devices.sh@59 -- # local pci status 00:04:42.108 17:41:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.108 17:41:46 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:42.108 17:41:46 -- setup/devices.sh@47 -- # setup output config 00:04:42.108 17:41:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.109 17:41:46 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:46.317 17:41:50 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.317 17:41:50 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:46.317 17:41:50 -- setup/devices.sh@63 -- # found=1 00:04:46.317 17:41:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.317 17:41:50 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.317 17:41:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.317 17:41:50 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.317 17:41:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.318 17:41:50 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.318 17:41:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.318 17:41:50 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.318 17:41:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.318 17:41:50 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.318 17:41:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.318 17:41:50 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.318 17:41:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.318 17:41:50 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.318 17:41:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.318 17:41:50 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.318 17:41:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.318 17:41:50 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.318 17:41:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.318 17:41:50 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.318 17:41:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.318 17:41:50 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.318 17:41:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.318 17:41:50 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.318 17:41:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.318 17:41:50 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.318 17:41:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.318 17:41:50 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.318 17:41:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.318 17:41:50 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.318 17:41:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.318 17:41:50 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.318 17:41:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.318 17:41:50 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:46.318 17:41:50 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:46.318 17:41:50 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:46.318 17:41:50 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:46.318 17:41:50 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:46.318 17:41:50 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:46.318 17:41:50 -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:46.318 17:41:50 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:46.318 17:41:50 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:46.318 17:41:50 -- setup/devices.sh@50 -- # local mount_point= 00:04:46.318 17:41:50 -- setup/devices.sh@51 -- # local test_file= 00:04:46.318 17:41:50 -- setup/devices.sh@53 -- # local found=0 00:04:46.318 17:41:50 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:46.318 17:41:50 -- setup/devices.sh@59 -- # local pci status 00:04:46.318 17:41:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.318 17:41:50 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:46.318 17:41:50 -- setup/devices.sh@47 -- # setup output config 00:04:46.318 17:41:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.318 17:41:50 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:50.529 17:41:54 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.529 17:41:54 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:50.529 17:41:54 -- setup/devices.sh@63 -- # found=1 00:04:50.529 17:41:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.529 17:41:54 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.529 17:41:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.529 17:41:54 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.529 17:41:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.529 17:41:54 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.529 17:41:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.529 17:41:54 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.529 17:41:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.529 17:41:54 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.529 17:41:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.529 17:41:54 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.529 17:41:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.529 17:41:54 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.529 17:41:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:04:50.529 17:41:54 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.529 17:41:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.529 17:41:54 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.529 17:41:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.529 17:41:54 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.529 17:41:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.529 17:41:54 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.529 17:41:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.529 17:41:54 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.529 17:41:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.529 17:41:54 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.529 17:41:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.529 17:41:54 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.529 17:41:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.529 17:41:54 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.529 17:41:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.529 17:41:54 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:50.529 17:41:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.529 17:41:54 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:50.529 17:41:54 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:50.529 17:41:54 -- setup/devices.sh@68 -- # return 0 00:04:50.529 17:41:54 -- setup/devices.sh@187 -- # cleanup_dm 00:04:50.529 17:41:54 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:50.529 17:41:54 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:50.529 17:41:54 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:50.529 17:41:54 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:50.529 17:41:54 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:50.529 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:50.529 17:41:54 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:50.529 17:41:54 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:50.529 00:04:50.529 real 0m11.261s 00:04:50.529 user 0m3.026s 00:04:50.529 sys 0m5.323s 00:04:50.529 17:41:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.529 17:41:54 -- common/autotest_common.sh@10 -- # set +x 00:04:50.529 ************************************ 00:04:50.529 END TEST dm_mount 00:04:50.529 ************************************ 00:04:50.529 17:41:54 -- setup/devices.sh@1 -- # cleanup 00:04:50.529 17:41:54 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:50.529 17:41:54 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.529 17:41:54 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:50.529 17:41:54 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:50.529 17:41:54 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:50.529 17:41:54 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:50.529 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:50.529 /dev/nvme0n1: 8 bytes were erased at offset 
0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:50.529 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:50.529 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:50.529 17:41:54 -- setup/devices.sh@12 -- # cleanup_dm 00:04:50.529 17:41:54 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:50.529 17:41:54 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:50.529 17:41:54 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:50.529 17:41:54 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:50.529 17:41:54 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:50.529 17:41:54 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:50.529 00:04:50.529 real 0m30.649s 00:04:50.529 user 0m9.285s 00:04:50.529 sys 0m16.254s 00:04:50.529 17:41:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.529 17:41:54 -- common/autotest_common.sh@10 -- # set +x 00:04:50.529 ************************************ 00:04:50.529 END TEST devices 00:04:50.529 ************************************ 00:04:50.529 00:04:50.529 real 1m47.592s 00:04:50.529 user 0m34.975s 00:04:50.530 sys 1m0.460s 00:04:50.530 17:41:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.530 17:41:54 -- common/autotest_common.sh@10 -- # set +x 00:04:50.530 ************************************ 00:04:50.530 END TEST setup.sh 00:04:50.530 ************************************ 00:04:50.791 17:41:54 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:55.081 Hugepages 00:04:55.081 node hugesize free / total 00:04:55.081 node0 1048576kB 0 / 0 00:04:55.081 node0 2048kB 2048 / 2048 00:04:55.081 node1 1048576kB 0 / 0 00:04:55.081 node1 2048kB 0 / 0 00:04:55.081 00:04:55.081 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:55.081 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:55.081 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:55.081 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:55.081 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:55.081 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:55.081 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:55.081 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:55.081 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:55.081 NVMe 0000:65:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:55.081 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:55.081 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:55.081 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:55.081 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:55.081 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:55.081 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:55.081 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:55.081 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:55.081 17:41:58 -- spdk/autotest.sh@141 -- # uname -s 00:04:55.081 17:41:58 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:55.081 17:41:58 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:04:55.081 17:41:58 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:59.283 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:59.283 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:59.283 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:59.283 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:59.283 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:59.283 0000:80:01.3 (8086 0b00): 
ioatdma -> vfio-pci 00:04:59.283 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:59.283 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:59.283 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:59.283 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:59.283 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:59.283 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:59.283 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:59.283 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:59.283 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:59.283 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:00.664 0000:65:00.0 (8086 0a54): nvme -> vfio-pci 00:05:00.664 17:42:04 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:01.605 17:42:05 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:01.605 17:42:05 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:01.605 17:42:05 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:01.605 17:42:05 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:01.605 17:42:05 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:01.605 17:42:05 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:01.605 17:42:05 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:01.605 17:42:05 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:01.605 17:42:05 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:01.866 17:42:05 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:01.866 17:42:05 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:05:01.866 17:42:05 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:06.069 Waiting for block devices as requested 00:05:06.069 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:06.069 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:06.069 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:06.069 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:06.069 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:06.069 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:06.331 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:06.331 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:06.331 0000:65:00.0 (8086 0a54): vfio-pci -> nvme 00:05:06.593 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:06.593 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:06.593 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:06.855 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:06.855 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:06.855 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:07.116 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:07.116 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:07.116 17:42:11 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:07.116 17:42:11 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:07.116 17:42:11 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:07.116 17:42:11 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:05:07.116 17:42:11 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:07.116 17:42:11 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:07.116 17:42:11 -- 
common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:07.116 17:42:11 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:07.116 17:42:11 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:07.116 17:42:11 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:07.116 17:42:11 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:07.116 17:42:11 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:07.116 17:42:11 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:07.116 17:42:11 -- common/autotest_common.sh@1530 -- # oacs=' 0xe' 00:05:07.116 17:42:11 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:07.116 17:42:11 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:07.117 17:42:11 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:07.117 17:42:11 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:07.117 17:42:11 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:07.117 17:42:11 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:07.117 17:42:11 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:07.117 17:42:11 -- common/autotest_common.sh@1542 -- # continue 00:05:07.117 17:42:11 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:07.117 17:42:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:07.117 17:42:11 -- common/autotest_common.sh@10 -- # set +x 00:05:07.378 17:42:11 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:07.378 17:42:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:07.378 17:42:11 -- common/autotest_common.sh@10 -- # set +x 00:05:07.378 17:42:11 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:11.585 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:11.585 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:11.585 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:11.585 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:11.585 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:11.585 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:11.585 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:11.585 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:11.585 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:11.585 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:11.585 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:11.585 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:11.585 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:11.585 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:11.585 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:11.585 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:13.499 0000:65:00.0 (8086 0a54): nvme -> vfio-pci 00:05:13.499 17:42:17 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:13.499 17:42:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:13.499 17:42:17 -- common/autotest_common.sh@10 -- # set +x 00:05:13.499 17:42:17 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:13.499 17:42:17 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:13.499 17:42:17 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:13.499 17:42:17 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:13.499 17:42:17 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:13.499 17:42:17 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:13.499 17:42:17 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:13.499 
17:42:17 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:13.499 17:42:17 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:13.499 17:42:17 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:13.499 17:42:17 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:13.499 17:42:17 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:13.499 17:42:17 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:05:13.499 17:42:17 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:13.499 17:42:17 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:13.499 17:42:17 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:05:13.499 17:42:17 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:13.499 17:42:17 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:05:13.499 17:42:17 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:65:00.0 00:05:13.499 17:42:17 -- common/autotest_common.sh@1577 -- # [[ -z 0000:65:00.0 ]] 00:05:13.499 17:42:17 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=1476874 00:05:13.499 17:42:17 -- common/autotest_common.sh@1583 -- # waitforlisten 1476874 00:05:13.499 17:42:17 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.499 17:42:17 -- common/autotest_common.sh@819 -- # '[' -z 1476874 ']' 00:05:13.499 17:42:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.499 17:42:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:13.499 17:42:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.499 17:42:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:13.499 17:42:17 -- common/autotest_common.sh@10 -- # set +x 00:05:13.499 [2024-07-22 17:42:17.618782] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
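The get_nvme_bdfs helper traced above resolves the NVMe controllers by piping the generated bdev config through jq. A minimal standalone sketch of that same enumeration, assuming an SPDK checkout at $rootdir with scripts/gen_nvme.sh present (both taken from the trace, not verified here), is:

    #!/usr/bin/env bash
    # Sketch: list NVMe PCI addresses (BDFs) the way the traced autotest helper does.
    # Assumption: $rootdir points at an SPDK source tree containing scripts/gen_nvme.sh.
    rootdir=${rootdir:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    # gen_nvme.sh emits a JSON bdev config; each attach entry carries the controller's traddr.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    # Fail early when nothing was found, otherwise print one BDF per line (e.g. 0000:65:00.0).
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"
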
00:05:13.499 [2024-07-22 17:42:17.618839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1476874 ] 00:05:13.499 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.499 [2024-07-22 17:42:17.702108] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.759 [2024-07-22 17:42:17.792956] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:13.759 [2024-07-22 17:42:17.793129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.331 17:42:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:14.331 17:42:18 -- common/autotest_common.sh@852 -- # return 0 00:05:14.332 17:42:18 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:05:14.332 17:42:18 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:05:14.332 17:42:18 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:65:00.0 00:05:17.631 nvme0n1 00:05:17.631 17:42:21 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:17.631 [2024-07-22 17:42:21.686878] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:17.631 request: 00:05:17.631 { 00:05:17.631 "nvme_ctrlr_name": "nvme0", 00:05:17.631 "password": "test", 00:05:17.631 "method": "bdev_nvme_opal_revert", 00:05:17.631 "req_id": 1 00:05:17.631 } 00:05:17.631 Got JSON-RPC error response 00:05:17.631 response: 00:05:17.631 { 00:05:17.631 "code": -32602, 00:05:17.631 "message": "Invalid parameters" 00:05:17.631 } 00:05:17.631 17:42:21 -- common/autotest_common.sh@1589 -- # true 00:05:17.632 17:42:21 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:05:17.632 17:42:21 -- common/autotest_common.sh@1593 -- # killprocess 1476874 00:05:17.632 17:42:21 -- common/autotest_common.sh@926 -- # '[' -z 1476874 ']' 00:05:17.632 17:42:21 -- common/autotest_common.sh@930 -- # kill -0 1476874 00:05:17.632 17:42:21 -- common/autotest_common.sh@931 -- # uname 00:05:17.632 17:42:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:17.632 17:42:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1476874 00:05:17.632 17:42:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:17.632 17:42:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:17.632 17:42:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1476874' 00:05:17.632 killing process with pid 1476874 00:05:17.632 17:42:21 -- common/autotest_common.sh@945 -- # kill 1476874 00:05:17.632 17:42:21 -- common/autotest_common.sh@950 -- # wait 1476874 00:05:20.175 17:42:24 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:20.175 17:42:24 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:20.175 17:42:24 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:20.175 17:42:24 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:20.175 17:42:24 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:20.175 17:42:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:20.175 17:42:24 -- common/autotest_common.sh@10 -- # set +x 00:05:20.175 17:42:24 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:20.175 17:42:24 
-- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:20.175 17:42:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:20.175 17:42:24 -- common/autotest_common.sh@10 -- # set +x 00:05:20.175 ************************************ 00:05:20.175 START TEST env 00:05:20.175 ************************************ 00:05:20.175 17:42:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:20.175 * Looking for test storage... 00:05:20.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:20.175 17:42:24 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:20.175 17:42:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:20.175 17:42:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:20.175 17:42:24 -- common/autotest_common.sh@10 -- # set +x 00:05:20.175 ************************************ 00:05:20.175 START TEST env_memory 00:05:20.175 ************************************ 00:05:20.175 17:42:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:20.175 00:05:20.175 00:05:20.175 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.175 http://cunit.sourceforge.net/ 00:05:20.175 00:05:20.175 00:05:20.175 Suite: memory 00:05:20.175 Test: alloc and free memory map ...[2024-07-22 17:42:24.368908] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:20.175 passed 00:05:20.175 Test: mem map translation ...[2024-07-22 17:42:24.392601] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:20.175 [2024-07-22 17:42:24.392629] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:20.175 [2024-07-22 17:42:24.392674] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:20.175 [2024-07-22 17:42:24.392681] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:20.175 passed 00:05:20.175 Test: mem map registration ...[2024-07-22 17:42:24.443757] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:20.175 [2024-07-22 17:42:24.443777] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:20.437 passed 00:05:20.437 Test: mem map adjacent registrations ...passed 00:05:20.437 00:05:20.437 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.437 suites 1 1 n/a 0 0 00:05:20.437 tests 4 4 4 0 0 00:05:20.437 asserts 152 152 152 0 n/a 00:05:20.437 00:05:20.437 Elapsed time = 0.181 seconds 00:05:20.437 00:05:20.437 real 0m0.194s 00:05:20.437 user 0m0.185s 00:05:20.437 sys 0m0.008s 00:05:20.437 17:42:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.437 17:42:24 -- common/autotest_common.sh@10 -- # set +x 
00:05:20.437 ************************************ 00:05:20.437 END TEST env_memory 00:05:20.437 ************************************ 00:05:20.437 17:42:24 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:20.437 17:42:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:20.437 17:42:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:20.437 17:42:24 -- common/autotest_common.sh@10 -- # set +x 00:05:20.437 ************************************ 00:05:20.437 START TEST env_vtophys 00:05:20.437 ************************************ 00:05:20.437 17:42:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:20.437 EAL: lib.eal log level changed from notice to debug 00:05:20.437 EAL: Detected lcore 0 as core 0 on socket 0 00:05:20.437 EAL: Detected lcore 1 as core 1 on socket 0 00:05:20.437 EAL: Detected lcore 2 as core 2 on socket 0 00:05:20.437 EAL: Detected lcore 3 as core 3 on socket 0 00:05:20.437 EAL: Detected lcore 4 as core 4 on socket 0 00:05:20.437 EAL: Detected lcore 5 as core 5 on socket 0 00:05:20.437 EAL: Detected lcore 6 as core 6 on socket 0 00:05:20.437 EAL: Detected lcore 7 as core 7 on socket 0 00:05:20.437 EAL: Detected lcore 8 as core 8 on socket 0 00:05:20.437 EAL: Detected lcore 9 as core 9 on socket 0 00:05:20.437 EAL: Detected lcore 10 as core 10 on socket 0 00:05:20.437 EAL: Detected lcore 11 as core 11 on socket 0 00:05:20.437 EAL: Detected lcore 12 as core 12 on socket 0 00:05:20.437 EAL: Detected lcore 13 as core 13 on socket 0 00:05:20.437 EAL: Detected lcore 14 as core 14 on socket 0 00:05:20.437 EAL: Detected lcore 15 as core 15 on socket 0 00:05:20.437 EAL: Detected lcore 16 as core 16 on socket 0 00:05:20.437 EAL: Detected lcore 17 as core 17 on socket 0 00:05:20.437 EAL: Detected lcore 18 as core 18 on socket 0 00:05:20.437 EAL: Detected lcore 19 as core 19 on socket 0 00:05:20.437 EAL: Detected lcore 20 as core 20 on socket 0 00:05:20.437 EAL: Detected lcore 21 as core 21 on socket 0 00:05:20.437 EAL: Detected lcore 22 as core 22 on socket 0 00:05:20.437 EAL: Detected lcore 23 as core 23 on socket 0 00:05:20.437 EAL: Detected lcore 24 as core 24 on socket 0 00:05:20.437 EAL: Detected lcore 25 as core 25 on socket 0 00:05:20.437 EAL: Detected lcore 26 as core 26 on socket 0 00:05:20.437 EAL: Detected lcore 27 as core 27 on socket 0 00:05:20.437 EAL: Detected lcore 28 as core 28 on socket 0 00:05:20.437 EAL: Detected lcore 29 as core 29 on socket 0 00:05:20.437 EAL: Detected lcore 30 as core 30 on socket 0 00:05:20.437 EAL: Detected lcore 31 as core 31 on socket 0 00:05:20.437 EAL: Detected lcore 32 as core 0 on socket 1 00:05:20.437 EAL: Detected lcore 33 as core 1 on socket 1 00:05:20.437 EAL: Detected lcore 34 as core 2 on socket 1 00:05:20.437 EAL: Detected lcore 35 as core 3 on socket 1 00:05:20.437 EAL: Detected lcore 36 as core 4 on socket 1 00:05:20.437 EAL: Detected lcore 37 as core 5 on socket 1 00:05:20.437 EAL: Detected lcore 38 as core 6 on socket 1 00:05:20.437 EAL: Detected lcore 39 as core 7 on socket 1 00:05:20.437 EAL: Detected lcore 40 as core 8 on socket 1 00:05:20.437 EAL: Detected lcore 41 as core 9 on socket 1 00:05:20.437 EAL: Detected lcore 42 as core 10 on socket 1 00:05:20.437 EAL: Detected lcore 43 as core 11 on socket 1 00:05:20.437 EAL: Detected lcore 44 as core 12 on socket 1 00:05:20.437 EAL: Detected lcore 45 as core 13 on socket 1 00:05:20.437 EAL: Detected lcore 46 as 
core 14 on socket 1 00:05:20.437 EAL: Detected lcore 47 as core 15 on socket 1 00:05:20.437 EAL: Detected lcore 48 as core 16 on socket 1 00:05:20.437 EAL: Detected lcore 49 as core 17 on socket 1 00:05:20.437 EAL: Detected lcore 50 as core 18 on socket 1 00:05:20.437 EAL: Detected lcore 51 as core 19 on socket 1 00:05:20.437 EAL: Detected lcore 52 as core 20 on socket 1 00:05:20.437 EAL: Detected lcore 53 as core 21 on socket 1 00:05:20.437 EAL: Detected lcore 54 as core 22 on socket 1 00:05:20.437 EAL: Detected lcore 55 as core 23 on socket 1 00:05:20.437 EAL: Detected lcore 56 as core 24 on socket 1 00:05:20.437 EAL: Detected lcore 57 as core 25 on socket 1 00:05:20.437 EAL: Detected lcore 58 as core 26 on socket 1 00:05:20.437 EAL: Detected lcore 59 as core 27 on socket 1 00:05:20.437 EAL: Detected lcore 60 as core 28 on socket 1 00:05:20.437 EAL: Detected lcore 61 as core 29 on socket 1 00:05:20.438 EAL: Detected lcore 62 as core 30 on socket 1 00:05:20.438 EAL: Detected lcore 63 as core 31 on socket 1 00:05:20.438 EAL: Detected lcore 64 as core 0 on socket 0 00:05:20.438 EAL: Detected lcore 65 as core 1 on socket 0 00:05:20.438 EAL: Detected lcore 66 as core 2 on socket 0 00:05:20.438 EAL: Detected lcore 67 as core 3 on socket 0 00:05:20.438 EAL: Detected lcore 68 as core 4 on socket 0 00:05:20.438 EAL: Detected lcore 69 as core 5 on socket 0 00:05:20.438 EAL: Detected lcore 70 as core 6 on socket 0 00:05:20.438 EAL: Detected lcore 71 as core 7 on socket 0 00:05:20.438 EAL: Detected lcore 72 as core 8 on socket 0 00:05:20.438 EAL: Detected lcore 73 as core 9 on socket 0 00:05:20.438 EAL: Detected lcore 74 as core 10 on socket 0 00:05:20.438 EAL: Detected lcore 75 as core 11 on socket 0 00:05:20.438 EAL: Detected lcore 76 as core 12 on socket 0 00:05:20.438 EAL: Detected lcore 77 as core 13 on socket 0 00:05:20.438 EAL: Detected lcore 78 as core 14 on socket 0 00:05:20.438 EAL: Detected lcore 79 as core 15 on socket 0 00:05:20.438 EAL: Detected lcore 80 as core 16 on socket 0 00:05:20.438 EAL: Detected lcore 81 as core 17 on socket 0 00:05:20.438 EAL: Detected lcore 82 as core 18 on socket 0 00:05:20.438 EAL: Detected lcore 83 as core 19 on socket 0 00:05:20.438 EAL: Detected lcore 84 as core 20 on socket 0 00:05:20.438 EAL: Detected lcore 85 as core 21 on socket 0 00:05:20.438 EAL: Detected lcore 86 as core 22 on socket 0 00:05:20.438 EAL: Detected lcore 87 as core 23 on socket 0 00:05:20.438 EAL: Detected lcore 88 as core 24 on socket 0 00:05:20.438 EAL: Detected lcore 89 as core 25 on socket 0 00:05:20.438 EAL: Detected lcore 90 as core 26 on socket 0 00:05:20.438 EAL: Detected lcore 91 as core 27 on socket 0 00:05:20.438 EAL: Detected lcore 92 as core 28 on socket 0 00:05:20.438 EAL: Detected lcore 93 as core 29 on socket 0 00:05:20.438 EAL: Detected lcore 94 as core 30 on socket 0 00:05:20.438 EAL: Detected lcore 95 as core 31 on socket 0 00:05:20.438 EAL: Detected lcore 96 as core 0 on socket 1 00:05:20.438 EAL: Detected lcore 97 as core 1 on socket 1 00:05:20.438 EAL: Detected lcore 98 as core 2 on socket 1 00:05:20.438 EAL: Detected lcore 99 as core 3 on socket 1 00:05:20.438 EAL: Detected lcore 100 as core 4 on socket 1 00:05:20.438 EAL: Detected lcore 101 as core 5 on socket 1 00:05:20.438 EAL: Detected lcore 102 as core 6 on socket 1 00:05:20.438 EAL: Detected lcore 103 as core 7 on socket 1 00:05:20.438 EAL: Detected lcore 104 as core 8 on socket 1 00:05:20.438 EAL: Detected lcore 105 as core 9 on socket 1 00:05:20.438 EAL: Detected lcore 106 as core 10 on socket 1 
00:05:20.438 EAL: Detected lcore 107 as core 11 on socket 1 00:05:20.438 EAL: Detected lcore 108 as core 12 on socket 1 00:05:20.438 EAL: Detected lcore 109 as core 13 on socket 1 00:05:20.438 EAL: Detected lcore 110 as core 14 on socket 1 00:05:20.438 EAL: Detected lcore 111 as core 15 on socket 1 00:05:20.438 EAL: Detected lcore 112 as core 16 on socket 1 00:05:20.438 EAL: Detected lcore 113 as core 17 on socket 1 00:05:20.438 EAL: Detected lcore 114 as core 18 on socket 1 00:05:20.438 EAL: Detected lcore 115 as core 19 on socket 1 00:05:20.438 EAL: Detected lcore 116 as core 20 on socket 1 00:05:20.438 EAL: Detected lcore 117 as core 21 on socket 1 00:05:20.438 EAL: Detected lcore 118 as core 22 on socket 1 00:05:20.438 EAL: Detected lcore 119 as core 23 on socket 1 00:05:20.438 EAL: Detected lcore 120 as core 24 on socket 1 00:05:20.438 EAL: Detected lcore 121 as core 25 on socket 1 00:05:20.438 EAL: Detected lcore 122 as core 26 on socket 1 00:05:20.438 EAL: Detected lcore 123 as core 27 on socket 1 00:05:20.438 EAL: Detected lcore 124 as core 28 on socket 1 00:05:20.438 EAL: Detected lcore 125 as core 29 on socket 1 00:05:20.438 EAL: Detected lcore 126 as core 30 on socket 1 00:05:20.438 EAL: Detected lcore 127 as core 31 on socket 1 00:05:20.438 EAL: Maximum logical cores by configuration: 128 00:05:20.438 EAL: Detected CPU lcores: 128 00:05:20.438 EAL: Detected NUMA nodes: 2 00:05:20.438 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:20.438 EAL: Detected shared linkage of DPDK 00:05:20.438 EAL: No shared files mode enabled, IPC will be disabled 00:05:20.438 EAL: Bus pci wants IOVA as 'DC' 00:05:20.438 EAL: Buses did not request a specific IOVA mode. 00:05:20.438 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:20.438 EAL: Selected IOVA mode 'VA' 00:05:20.438 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.438 EAL: Probing VFIO support... 00:05:20.438 EAL: IOMMU type 1 (Type 1) is supported 00:05:20.438 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:20.438 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:20.438 EAL: VFIO support initialized 00:05:20.438 EAL: Ask a virtual area of 0x2e000 bytes 00:05:20.438 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:20.438 EAL: Setting up physically contiguous memory... 
00:05:20.438 EAL: Setting maximum number of open files to 524288 00:05:20.438 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:20.438 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:20.438 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:20.438 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.438 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:20.438 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:20.438 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.438 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:20.438 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:20.438 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.438 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:20.438 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:20.438 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.438 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:20.438 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:20.438 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.438 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:20.438 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:20.438 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.438 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:20.438 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:20.438 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.438 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:20.438 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:20.438 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.438 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:20.438 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:20.438 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:20.438 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.438 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:20.438 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:20.438 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.438 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:20.438 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:20.438 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.438 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:20.438 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:20.438 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.438 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:20.438 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:20.438 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.438 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:20.438 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:20.438 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.438 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:20.438 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:20.438 EAL: Ask a virtual area of 0x61000 bytes 00:05:20.438 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:20.438 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:20.438 EAL: Ask a virtual area of 0x400000000 bytes 00:05:20.438 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:20.438 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:20.438 EAL: Hugepages will be freed exactly as allocated. 00:05:20.438 EAL: No shared files mode enabled, IPC is disabled 00:05:20.438 EAL: No shared files mode enabled, IPC is disabled 00:05:20.438 EAL: TSC frequency is ~2600000 KHz 00:05:20.438 EAL: Main lcore 0 is ready (tid=7f6beee10a00;cpuset=[0]) 00:05:20.438 EAL: Trying to obtain current memory policy. 00:05:20.438 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.438 EAL: Restoring previous memory policy: 0 00:05:20.438 EAL: request: mp_malloc_sync 00:05:20.438 EAL: No shared files mode enabled, IPC is disabled 00:05:20.438 EAL: Heap on socket 0 was expanded by 2MB 00:05:20.438 EAL: No shared files mode enabled, IPC is disabled 00:05:20.438 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:20.438 EAL: Mem event callback 'spdk:(nil)' registered 00:05:20.438 00:05:20.438 00:05:20.438 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.438 http://cunit.sourceforge.net/ 00:05:20.438 00:05:20.438 00:05:20.438 Suite: components_suite 00:05:20.438 Test: vtophys_malloc_test ...passed 00:05:20.438 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:20.438 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.438 EAL: Restoring previous memory policy: 4 00:05:20.438 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.438 EAL: request: mp_malloc_sync 00:05:20.438 EAL: No shared files mode enabled, IPC is disabled 00:05:20.438 EAL: Heap on socket 0 was expanded by 4MB 00:05:20.438 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.438 EAL: request: mp_malloc_sync 00:05:20.438 EAL: No shared files mode enabled, IPC is disabled 00:05:20.438 EAL: Heap on socket 0 was shrunk by 4MB 00:05:20.438 EAL: Trying to obtain current memory policy. 00:05:20.438 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.438 EAL: Restoring previous memory policy: 4 00:05:20.438 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.438 EAL: request: mp_malloc_sync 00:05:20.438 EAL: No shared files mode enabled, IPC is disabled 00:05:20.438 EAL: Heap on socket 0 was expanded by 6MB 00:05:20.438 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.438 EAL: request: mp_malloc_sync 00:05:20.438 EAL: No shared files mode enabled, IPC is disabled 00:05:20.438 EAL: Heap on socket 0 was shrunk by 6MB 00:05:20.438 EAL: Trying to obtain current memory policy. 00:05:20.438 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.438 EAL: Restoring previous memory policy: 4 00:05:20.439 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.439 EAL: request: mp_malloc_sync 00:05:20.439 EAL: No shared files mode enabled, IPC is disabled 00:05:20.439 EAL: Heap on socket 0 was expanded by 10MB 00:05:20.439 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.439 EAL: request: mp_malloc_sync 00:05:20.439 EAL: No shared files mode enabled, IPC is disabled 00:05:20.439 EAL: Heap on socket 0 was shrunk by 10MB 00:05:20.439 EAL: Trying to obtain current memory policy. 
00:05:20.439 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.439 EAL: Restoring previous memory policy: 4 00:05:20.439 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.439 EAL: request: mp_malloc_sync 00:05:20.439 EAL: No shared files mode enabled, IPC is disabled 00:05:20.439 EAL: Heap on socket 0 was expanded by 18MB 00:05:20.439 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.439 EAL: request: mp_malloc_sync 00:05:20.439 EAL: No shared files mode enabled, IPC is disabled 00:05:20.439 EAL: Heap on socket 0 was shrunk by 18MB 00:05:20.439 EAL: Trying to obtain current memory policy. 00:05:20.439 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.439 EAL: Restoring previous memory policy: 4 00:05:20.439 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.439 EAL: request: mp_malloc_sync 00:05:20.439 EAL: No shared files mode enabled, IPC is disabled 00:05:20.439 EAL: Heap on socket 0 was expanded by 34MB 00:05:20.439 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.439 EAL: request: mp_malloc_sync 00:05:20.439 EAL: No shared files mode enabled, IPC is disabled 00:05:20.439 EAL: Heap on socket 0 was shrunk by 34MB 00:05:20.439 EAL: Trying to obtain current memory policy. 00:05:20.439 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.439 EAL: Restoring previous memory policy: 4 00:05:20.439 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.439 EAL: request: mp_malloc_sync 00:05:20.439 EAL: No shared files mode enabled, IPC is disabled 00:05:20.439 EAL: Heap on socket 0 was expanded by 66MB 00:05:20.439 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.439 EAL: request: mp_malloc_sync 00:05:20.439 EAL: No shared files mode enabled, IPC is disabled 00:05:20.439 EAL: Heap on socket 0 was shrunk by 66MB 00:05:20.439 EAL: Trying to obtain current memory policy. 00:05:20.439 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.700 EAL: Restoring previous memory policy: 4 00:05:20.700 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.700 EAL: request: mp_malloc_sync 00:05:20.700 EAL: No shared files mode enabled, IPC is disabled 00:05:20.700 EAL: Heap on socket 0 was expanded by 130MB 00:05:20.700 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.700 EAL: request: mp_malloc_sync 00:05:20.700 EAL: No shared files mode enabled, IPC is disabled 00:05:20.700 EAL: Heap on socket 0 was shrunk by 130MB 00:05:20.700 EAL: Trying to obtain current memory policy. 00:05:20.700 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.700 EAL: Restoring previous memory policy: 4 00:05:20.700 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.700 EAL: request: mp_malloc_sync 00:05:20.700 EAL: No shared files mode enabled, IPC is disabled 00:05:20.700 EAL: Heap on socket 0 was expanded by 258MB 00:05:20.700 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.700 EAL: request: mp_malloc_sync 00:05:20.700 EAL: No shared files mode enabled, IPC is disabled 00:05:20.700 EAL: Heap on socket 0 was shrunk by 258MB 00:05:20.700 EAL: Trying to obtain current memory policy. 
00:05:20.700 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.700 EAL: Restoring previous memory policy: 4 00:05:20.700 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.700 EAL: request: mp_malloc_sync 00:05:20.700 EAL: No shared files mode enabled, IPC is disabled 00:05:20.700 EAL: Heap on socket 0 was expanded by 514MB 00:05:20.700 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.961 EAL: request: mp_malloc_sync 00:05:20.961 EAL: No shared files mode enabled, IPC is disabled 00:05:20.961 EAL: Heap on socket 0 was shrunk by 514MB 00:05:20.961 EAL: Trying to obtain current memory policy. 00:05:20.961 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.961 EAL: Restoring previous memory policy: 4 00:05:20.961 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.961 EAL: request: mp_malloc_sync 00:05:20.961 EAL: No shared files mode enabled, IPC is disabled 00:05:20.961 EAL: Heap on socket 0 was expanded by 1026MB 00:05:20.961 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.221 EAL: request: mp_malloc_sync 00:05:21.221 EAL: No shared files mode enabled, IPC is disabled 00:05:21.221 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:21.221 passed 00:05:21.221 00:05:21.221 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.221 suites 1 1 n/a 0 0 00:05:21.221 tests 2 2 2 0 0 00:05:21.221 asserts 497 497 497 0 n/a 00:05:21.221 00:05:21.221 Elapsed time = 0.626 seconds 00:05:21.221 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.221 EAL: request: mp_malloc_sync 00:05:21.221 EAL: No shared files mode enabled, IPC is disabled 00:05:21.221 EAL: Heap on socket 0 was shrunk by 2MB 00:05:21.221 EAL: No shared files mode enabled, IPC is disabled 00:05:21.221 EAL: No shared files mode enabled, IPC is disabled 00:05:21.221 EAL: No shared files mode enabled, IPC is disabled 00:05:21.221 00:05:21.221 real 0m0.767s 00:05:21.221 user 0m0.402s 00:05:21.221 sys 0m0.339s 00:05:21.221 17:42:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.221 17:42:25 -- common/autotest_common.sh@10 -- # set +x 00:05:21.221 ************************************ 00:05:21.221 END TEST env_vtophys 00:05:21.221 ************************************ 00:05:21.221 17:42:25 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:21.221 17:42:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:21.221 17:42:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:21.222 17:42:25 -- common/autotest_common.sh@10 -- # set +x 00:05:21.222 ************************************ 00:05:21.222 START TEST env_pci 00:05:21.222 ************************************ 00:05:21.222 17:42:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:21.222 00:05:21.222 00:05:21.222 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.222 http://cunit.sourceforge.net/ 00:05:21.222 00:05:21.222 00:05:21.222 Suite: pci 00:05:21.222 Test: pci_hook ...[2024-07-22 17:42:25.383566] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1478422 has claimed it 00:05:21.222 EAL: Cannot find device (10000:00:01.0) 00:05:21.222 EAL: Failed to attach device on primary process 00:05:21.222 passed 00:05:21.222 00:05:21.222 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.222 suites 1 1 n/a 0 0 00:05:21.222 tests 1 1 1 0 0 
00:05:21.222 asserts 25 25 25 0 n/a 00:05:21.222 00:05:21.222 Elapsed time = 0.034 seconds 00:05:21.222 00:05:21.222 real 0m0.055s 00:05:21.222 user 0m0.019s 00:05:21.222 sys 0m0.036s 00:05:21.222 17:42:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.222 17:42:25 -- common/autotest_common.sh@10 -- # set +x 00:05:21.222 ************************************ 00:05:21.222 END TEST env_pci 00:05:21.222 ************************************ 00:05:21.222 17:42:25 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:21.222 17:42:25 -- env/env.sh@15 -- # uname 00:05:21.222 17:42:25 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:21.222 17:42:25 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:21.222 17:42:25 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:21.222 17:42:25 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:21.222 17:42:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:21.222 17:42:25 -- common/autotest_common.sh@10 -- # set +x 00:05:21.222 ************************************ 00:05:21.222 START TEST env_dpdk_post_init 00:05:21.222 ************************************ 00:05:21.222 17:42:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:21.482 EAL: Detected CPU lcores: 128 00:05:21.482 EAL: Detected NUMA nodes: 2 00:05:21.482 EAL: Detected shared linkage of DPDK 00:05:21.482 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:21.482 EAL: Selected IOVA mode 'VA' 00:05:21.482 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.482 EAL: VFIO support initialized 00:05:21.482 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:21.482 EAL: Using IOMMU type 1 (Type 1) 00:05:21.482 EAL: Ignore mapping IO port bar(1) 00:05:21.742 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:21.742 EAL: Ignore mapping IO port bar(1) 00:05:22.003 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:22.003 EAL: Ignore mapping IO port bar(1) 00:05:22.263 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:22.263 EAL: Ignore mapping IO port bar(1) 00:05:22.263 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:22.523 EAL: Ignore mapping IO port bar(1) 00:05:22.523 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:22.783 EAL: Ignore mapping IO port bar(1) 00:05:22.783 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:23.044 EAL: Ignore mapping IO port bar(1) 00:05:23.044 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:23.044 EAL: Ignore mapping IO port bar(1) 00:05:23.304 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:23.884 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:65:00.0 (socket 0) 00:05:24.144 EAL: Ignore mapping IO port bar(1) 00:05:24.144 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:24.404 EAL: Ignore mapping IO port bar(1) 00:05:24.404 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:24.665 EAL: Ignore mapping IO port bar(1) 00:05:24.665 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 
00:05:24.665 EAL: Ignore mapping IO port bar(1) 00:05:24.926 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:24.926 EAL: Ignore mapping IO port bar(1) 00:05:25.186 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:25.186 EAL: Ignore mapping IO port bar(1) 00:05:25.446 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:25.446 EAL: Ignore mapping IO port bar(1) 00:05:25.446 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:25.706 EAL: Ignore mapping IO port bar(1) 00:05:25.706 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:29.912 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:29.912 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:30.172 Starting DPDK initialization... 00:05:30.172 Starting SPDK post initialization... 00:05:30.172 SPDK NVMe probe 00:05:30.172 Attaching to 0000:65:00.0 00:05:30.172 Attached to 0000:65:00.0 00:05:30.172 Cleaning up... 00:05:32.084 00:05:32.084 real 0m10.404s 00:05:32.084 user 0m4.242s 00:05:32.084 sys 0m0.182s 00:05:32.084 17:42:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.084 17:42:35 -- common/autotest_common.sh@10 -- # set +x 00:05:32.084 ************************************ 00:05:32.084 END TEST env_dpdk_post_init 00:05:32.084 ************************************ 00:05:32.084 17:42:35 -- env/env.sh@26 -- # uname 00:05:32.084 17:42:35 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:32.084 17:42:35 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:32.084 17:42:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:32.084 17:42:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:32.084 17:42:35 -- common/autotest_common.sh@10 -- # set +x 00:05:32.084 ************************************ 00:05:32.084 START TEST env_mem_callbacks 00:05:32.084 ************************************ 00:05:32.084 17:42:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:32.084 EAL: Detected CPU lcores: 128 00:05:32.084 EAL: Detected NUMA nodes: 2 00:05:32.084 EAL: Detected shared linkage of DPDK 00:05:32.084 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:32.084 EAL: Selected IOVA mode 'VA' 00:05:32.084 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.084 EAL: VFIO support initialized 00:05:32.084 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:32.084 00:05:32.084 00:05:32.084 CUnit - A unit testing framework for C - Version 2.1-3 00:05:32.084 http://cunit.sourceforge.net/ 00:05:32.084 00:05:32.084 00:05:32.084 Suite: memory 00:05:32.084 Test: test ... 
00:05:32.084 register 0x200000200000 2097152 00:05:32.084 malloc 3145728 00:05:32.084 register 0x200000400000 4194304 00:05:32.084 buf 0x200000500000 len 3145728 PASSED 00:05:32.084 malloc 64 00:05:32.084 buf 0x2000004fff40 len 64 PASSED 00:05:32.084 malloc 4194304 00:05:32.084 register 0x200000800000 6291456 00:05:32.084 buf 0x200000a00000 len 4194304 PASSED 00:05:32.084 free 0x200000500000 3145728 00:05:32.084 free 0x2000004fff40 64 00:05:32.084 unregister 0x200000400000 4194304 PASSED 00:05:32.084 free 0x200000a00000 4194304 00:05:32.084 unregister 0x200000800000 6291456 PASSED 00:05:32.084 malloc 8388608 00:05:32.084 register 0x200000400000 10485760 00:05:32.084 buf 0x200000600000 len 8388608 PASSED 00:05:32.084 free 0x200000600000 8388608 00:05:32.084 unregister 0x200000400000 10485760 PASSED 00:05:32.084 passed 00:05:32.084 00:05:32.084 Run Summary: Type Total Ran Passed Failed Inactive 00:05:32.084 suites 1 1 n/a 0 0 00:05:32.084 tests 1 1 1 0 0 00:05:32.084 asserts 15 15 15 0 n/a 00:05:32.084 00:05:32.084 Elapsed time = 0.010 seconds 00:05:32.084 00:05:32.084 real 0m0.070s 00:05:32.084 user 0m0.023s 00:05:32.084 sys 0m0.047s 00:05:32.084 17:42:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.084 17:42:35 -- common/autotest_common.sh@10 -- # set +x 00:05:32.084 ************************************ 00:05:32.084 END TEST env_mem_callbacks 00:05:32.084 ************************************ 00:05:32.084 00:05:32.084 real 0m11.817s 00:05:32.084 user 0m4.979s 00:05:32.084 sys 0m0.872s 00:05:32.084 17:42:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.084 17:42:36 -- common/autotest_common.sh@10 -- # set +x 00:05:32.084 ************************************ 00:05:32.084 END TEST env 00:05:32.084 ************************************ 00:05:32.084 17:42:36 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:32.084 17:42:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:32.084 17:42:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:32.084 17:42:36 -- common/autotest_common.sh@10 -- # set +x 00:05:32.084 ************************************ 00:05:32.084 START TEST rpc 00:05:32.084 ************************************ 00:05:32.084 17:42:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:32.084 * Looking for test storage... 00:05:32.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:32.084 17:42:36 -- rpc/rpc.sh@65 -- # spdk_pid=1480285 00:05:32.084 17:42:36 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.084 17:42:36 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:32.084 17:42:36 -- rpc/rpc.sh@67 -- # waitforlisten 1480285 00:05:32.084 17:42:36 -- common/autotest_common.sh@819 -- # '[' -z 1480285 ']' 00:05:32.084 17:42:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.084 17:42:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:32.084 17:42:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
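The rpc test above launches spdk_tgt and then blocks in waitforlisten until the target answers on /var/tmp/spdk.sock before any RPCs are issued. A rough, hand-rolled equivalent of that wait, assuming scripts/rpc.py from the same SPDK tree and a target already started in the background (this is only a sketch of the idea, not the waitforlisten implementation itself), is:

    # Sketch: poll the SPDK JSON-RPC socket until the freshly started target responds.
    # Assumption: spdk_tgt is running in the background and $rootdir is the SPDK tree.
    rpc_sock=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        # rpc_get_methods only succeeds once the target is listening on the socket.
        if "$rootdir/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done
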
00:05:32.084 17:42:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:32.084 17:42:36 -- common/autotest_common.sh@10 -- # set +x 00:05:32.084 [2024-07-22 17:42:36.238316] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:32.084 [2024-07-22 17:42:36.238403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1480285 ] 00:05:32.084 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.084 [2024-07-22 17:42:36.323310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.345 [2024-07-22 17:42:36.389141] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:32.345 [2024-07-22 17:42:36.389256] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:32.345 [2024-07-22 17:42:36.389265] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1480285' to capture a snapshot of events at runtime. 00:05:32.345 [2024-07-22 17:42:36.389273] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1480285 for offline analysis/debug. 00:05:32.345 [2024-07-22 17:42:36.389299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.916 17:42:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:32.916 17:42:37 -- common/autotest_common.sh@852 -- # return 0 00:05:32.916 17:42:37 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:32.916 17:42:37 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:32.916 17:42:37 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:32.916 17:42:37 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:32.916 17:42:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:32.916 17:42:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:32.916 17:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:32.916 ************************************ 00:05:32.916 START TEST rpc_integrity 00:05:32.916 ************************************ 00:05:32.916 17:42:37 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:32.916 17:42:37 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:32.916 17:42:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:32.916 17:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:32.916 17:42:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:32.916 17:42:37 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:32.916 17:42:37 -- rpc/rpc.sh@13 -- # jq length 00:05:32.916 17:42:37 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:32.916 17:42:37 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:32.916 17:42:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:32.916 17:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:32.916 17:42:37 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:05:32.916 17:42:37 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:32.916 17:42:37 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:32.916 17:42:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:32.916 17:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:32.916 17:42:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:32.916 17:42:37 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:32.916 { 00:05:32.916 "name": "Malloc0", 00:05:32.916 "aliases": [ 00:05:32.916 "2e2367b6-e9a7-4ee7-b3f5-fbc48734d76a" 00:05:32.916 ], 00:05:32.916 "product_name": "Malloc disk", 00:05:32.916 "block_size": 512, 00:05:32.916 "num_blocks": 16384, 00:05:32.916 "uuid": "2e2367b6-e9a7-4ee7-b3f5-fbc48734d76a", 00:05:32.916 "assigned_rate_limits": { 00:05:32.916 "rw_ios_per_sec": 0, 00:05:32.916 "rw_mbytes_per_sec": 0, 00:05:32.916 "r_mbytes_per_sec": 0, 00:05:32.916 "w_mbytes_per_sec": 0 00:05:32.916 }, 00:05:32.916 "claimed": false, 00:05:32.916 "zoned": false, 00:05:32.916 "supported_io_types": { 00:05:32.917 "read": true, 00:05:32.917 "write": true, 00:05:32.917 "unmap": true, 00:05:32.917 "write_zeroes": true, 00:05:32.917 "flush": true, 00:05:32.917 "reset": true, 00:05:32.917 "compare": false, 00:05:32.917 "compare_and_write": false, 00:05:32.917 "abort": true, 00:05:32.917 "nvme_admin": false, 00:05:32.917 "nvme_io": false 00:05:32.917 }, 00:05:32.917 "memory_domains": [ 00:05:32.917 { 00:05:32.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.917 "dma_device_type": 2 00:05:32.917 } 00:05:32.917 ], 00:05:32.917 "driver_specific": {} 00:05:32.917 } 00:05:32.917 ]' 00:05:32.917 17:42:37 -- rpc/rpc.sh@17 -- # jq length 00:05:33.178 17:42:37 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:33.178 17:42:37 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:33.178 17:42:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:33.178 17:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:33.178 [2024-07-22 17:42:37.220368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:33.178 [2024-07-22 17:42:37.220400] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:33.178 [2024-07-22 17:42:37.220415] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb0ae50 00:05:33.178 [2024-07-22 17:42:37.220422] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:33.178 [2024-07-22 17:42:37.221629] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:33.178 [2024-07-22 17:42:37.221649] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:33.178 Passthru0 00:05:33.178 17:42:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:33.178 17:42:37 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:33.178 17:42:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:33.178 17:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:33.178 17:42:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:33.178 17:42:37 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:33.178 { 00:05:33.178 "name": "Malloc0", 00:05:33.178 "aliases": [ 00:05:33.178 "2e2367b6-e9a7-4ee7-b3f5-fbc48734d76a" 00:05:33.178 ], 00:05:33.178 "product_name": "Malloc disk", 00:05:33.178 "block_size": 512, 00:05:33.178 "num_blocks": 16384, 00:05:33.178 "uuid": "2e2367b6-e9a7-4ee7-b3f5-fbc48734d76a", 00:05:33.178 "assigned_rate_limits": { 00:05:33.178 "rw_ios_per_sec": 0, 00:05:33.178 "rw_mbytes_per_sec": 0, 00:05:33.178 
"r_mbytes_per_sec": 0, 00:05:33.178 "w_mbytes_per_sec": 0 00:05:33.178 }, 00:05:33.178 "claimed": true, 00:05:33.178 "claim_type": "exclusive_write", 00:05:33.178 "zoned": false, 00:05:33.178 "supported_io_types": { 00:05:33.178 "read": true, 00:05:33.178 "write": true, 00:05:33.178 "unmap": true, 00:05:33.178 "write_zeroes": true, 00:05:33.178 "flush": true, 00:05:33.178 "reset": true, 00:05:33.178 "compare": false, 00:05:33.178 "compare_and_write": false, 00:05:33.178 "abort": true, 00:05:33.178 "nvme_admin": false, 00:05:33.178 "nvme_io": false 00:05:33.178 }, 00:05:33.178 "memory_domains": [ 00:05:33.178 { 00:05:33.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.178 "dma_device_type": 2 00:05:33.178 } 00:05:33.178 ], 00:05:33.178 "driver_specific": {} 00:05:33.178 }, 00:05:33.178 { 00:05:33.178 "name": "Passthru0", 00:05:33.178 "aliases": [ 00:05:33.178 "88c51d8b-8727-5857-a793-0ac62f275ee4" 00:05:33.178 ], 00:05:33.178 "product_name": "passthru", 00:05:33.178 "block_size": 512, 00:05:33.178 "num_blocks": 16384, 00:05:33.178 "uuid": "88c51d8b-8727-5857-a793-0ac62f275ee4", 00:05:33.178 "assigned_rate_limits": { 00:05:33.178 "rw_ios_per_sec": 0, 00:05:33.178 "rw_mbytes_per_sec": 0, 00:05:33.178 "r_mbytes_per_sec": 0, 00:05:33.178 "w_mbytes_per_sec": 0 00:05:33.178 }, 00:05:33.178 "claimed": false, 00:05:33.178 "zoned": false, 00:05:33.178 "supported_io_types": { 00:05:33.178 "read": true, 00:05:33.178 "write": true, 00:05:33.178 "unmap": true, 00:05:33.178 "write_zeroes": true, 00:05:33.178 "flush": true, 00:05:33.178 "reset": true, 00:05:33.178 "compare": false, 00:05:33.178 "compare_and_write": false, 00:05:33.178 "abort": true, 00:05:33.178 "nvme_admin": false, 00:05:33.178 "nvme_io": false 00:05:33.178 }, 00:05:33.178 "memory_domains": [ 00:05:33.178 { 00:05:33.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.178 "dma_device_type": 2 00:05:33.178 } 00:05:33.178 ], 00:05:33.178 "driver_specific": { 00:05:33.178 "passthru": { 00:05:33.178 "name": "Passthru0", 00:05:33.178 "base_bdev_name": "Malloc0" 00:05:33.178 } 00:05:33.178 } 00:05:33.178 } 00:05:33.178 ]' 00:05:33.178 17:42:37 -- rpc/rpc.sh@21 -- # jq length 00:05:33.178 17:42:37 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:33.178 17:42:37 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:33.178 17:42:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:33.178 17:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:33.178 17:42:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:33.178 17:42:37 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:33.178 17:42:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:33.178 17:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:33.178 17:42:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:33.178 17:42:37 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:33.178 17:42:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:33.178 17:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:33.178 17:42:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:33.178 17:42:37 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:33.178 17:42:37 -- rpc/rpc.sh@26 -- # jq length 00:05:33.178 17:42:37 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:33.178 00:05:33.178 real 0m0.282s 00:05:33.178 user 0m0.177s 00:05:33.178 sys 0m0.036s 00:05:33.178 17:42:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.178 17:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:33.178 ************************************ 
00:05:33.178 END TEST rpc_integrity 00:05:33.178 ************************************ 00:05:33.178 17:42:37 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:33.178 17:42:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:33.178 17:42:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:33.178 17:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:33.178 ************************************ 00:05:33.178 START TEST rpc_plugins 00:05:33.178 ************************************ 00:05:33.178 17:42:37 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:33.178 17:42:37 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:33.178 17:42:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:33.178 17:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:33.178 17:42:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:33.178 17:42:37 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:33.178 17:42:37 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:33.178 17:42:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:33.178 17:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:33.178 17:42:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:33.178 17:42:37 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:33.178 { 00:05:33.178 "name": "Malloc1", 00:05:33.178 "aliases": [ 00:05:33.178 "507ddc22-aa7b-4f59-9a19-1a9f1264a793" 00:05:33.178 ], 00:05:33.178 "product_name": "Malloc disk", 00:05:33.178 "block_size": 4096, 00:05:33.178 "num_blocks": 256, 00:05:33.178 "uuid": "507ddc22-aa7b-4f59-9a19-1a9f1264a793", 00:05:33.178 "assigned_rate_limits": { 00:05:33.178 "rw_ios_per_sec": 0, 00:05:33.178 "rw_mbytes_per_sec": 0, 00:05:33.178 "r_mbytes_per_sec": 0, 00:05:33.178 "w_mbytes_per_sec": 0 00:05:33.178 }, 00:05:33.178 "claimed": false, 00:05:33.178 "zoned": false, 00:05:33.178 "supported_io_types": { 00:05:33.178 "read": true, 00:05:33.178 "write": true, 00:05:33.178 "unmap": true, 00:05:33.178 "write_zeroes": true, 00:05:33.178 "flush": true, 00:05:33.178 "reset": true, 00:05:33.178 "compare": false, 00:05:33.178 "compare_and_write": false, 00:05:33.178 "abort": true, 00:05:33.178 "nvme_admin": false, 00:05:33.178 "nvme_io": false 00:05:33.178 }, 00:05:33.178 "memory_domains": [ 00:05:33.178 { 00:05:33.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.178 "dma_device_type": 2 00:05:33.178 } 00:05:33.178 ], 00:05:33.178 "driver_specific": {} 00:05:33.178 } 00:05:33.178 ]' 00:05:33.178 17:42:37 -- rpc/rpc.sh@32 -- # jq length 00:05:33.439 17:42:37 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:33.439 17:42:37 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:33.439 17:42:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:33.439 17:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:33.439 17:42:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:33.439 17:42:37 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:33.439 17:42:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:33.439 17:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:33.439 17:42:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:33.439 17:42:37 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:33.439 17:42:37 -- rpc/rpc.sh@36 -- # jq length 00:05:33.439 17:42:37 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:33.439 00:05:33.439 real 0m0.146s 00:05:33.439 user 0m0.094s 00:05:33.439 sys 0m0.019s 00:05:33.439 17:42:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.439 17:42:37 -- 
common/autotest_common.sh@10 -- # set +x 00:05:33.439 ************************************ 00:05:33.439 END TEST rpc_plugins 00:05:33.439 ************************************ 00:05:33.439 17:42:37 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:33.439 17:42:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:33.439 17:42:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:33.439 17:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:33.439 ************************************ 00:05:33.439 START TEST rpc_trace_cmd_test 00:05:33.439 ************************************ 00:05:33.439 17:42:37 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:33.439 17:42:37 -- rpc/rpc.sh@40 -- # local info 00:05:33.439 17:42:37 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:33.439 17:42:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:33.439 17:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:33.439 17:42:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:33.439 17:42:37 -- rpc/rpc.sh@42 -- # info='{ 00:05:33.439 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1480285", 00:05:33.439 "tpoint_group_mask": "0x8", 00:05:33.439 "iscsi_conn": { 00:05:33.439 "mask": "0x2", 00:05:33.439 "tpoint_mask": "0x0" 00:05:33.439 }, 00:05:33.439 "scsi": { 00:05:33.439 "mask": "0x4", 00:05:33.439 "tpoint_mask": "0x0" 00:05:33.439 }, 00:05:33.439 "bdev": { 00:05:33.439 "mask": "0x8", 00:05:33.439 "tpoint_mask": "0xffffffffffffffff" 00:05:33.439 }, 00:05:33.439 "nvmf_rdma": { 00:05:33.439 "mask": "0x10", 00:05:33.439 "tpoint_mask": "0x0" 00:05:33.439 }, 00:05:33.439 "nvmf_tcp": { 00:05:33.439 "mask": "0x20", 00:05:33.439 "tpoint_mask": "0x0" 00:05:33.439 }, 00:05:33.439 "ftl": { 00:05:33.439 "mask": "0x40", 00:05:33.439 "tpoint_mask": "0x0" 00:05:33.439 }, 00:05:33.439 "blobfs": { 00:05:33.439 "mask": "0x80", 00:05:33.439 "tpoint_mask": "0x0" 00:05:33.439 }, 00:05:33.439 "dsa": { 00:05:33.439 "mask": "0x200", 00:05:33.439 "tpoint_mask": "0x0" 00:05:33.439 }, 00:05:33.439 "thread": { 00:05:33.439 "mask": "0x400", 00:05:33.439 "tpoint_mask": "0x0" 00:05:33.439 }, 00:05:33.439 "nvme_pcie": { 00:05:33.439 "mask": "0x800", 00:05:33.439 "tpoint_mask": "0x0" 00:05:33.439 }, 00:05:33.439 "iaa": { 00:05:33.439 "mask": "0x1000", 00:05:33.439 "tpoint_mask": "0x0" 00:05:33.439 }, 00:05:33.439 "nvme_tcp": { 00:05:33.439 "mask": "0x2000", 00:05:33.439 "tpoint_mask": "0x0" 00:05:33.439 }, 00:05:33.439 "bdev_nvme": { 00:05:33.439 "mask": "0x4000", 00:05:33.439 "tpoint_mask": "0x0" 00:05:33.439 } 00:05:33.439 }' 00:05:33.439 17:42:37 -- rpc/rpc.sh@43 -- # jq length 00:05:33.439 17:42:37 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:33.439 17:42:37 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:33.439 17:42:37 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:33.439 17:42:37 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:33.700 17:42:37 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:33.700 17:42:37 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:33.700 17:42:37 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:33.700 17:42:37 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:33.700 17:42:37 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:33.700 00:05:33.700 real 0m0.257s 00:05:33.700 user 0m0.227s 00:05:33.700 sys 0m0.023s 00:05:33.700 17:42:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.700 17:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:33.700 ************************************ 
00:05:33.700 END TEST rpc_trace_cmd_test 00:05:33.700 ************************************ 00:05:33.700 17:42:37 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:33.700 17:42:37 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:33.700 17:42:37 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:33.700 17:42:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:33.700 17:42:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:33.700 17:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:33.700 ************************************ 00:05:33.700 START TEST rpc_daemon_integrity 00:05:33.700 ************************************ 00:05:33.700 17:42:37 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:33.700 17:42:37 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:33.700 17:42:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:33.700 17:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:33.700 17:42:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:33.700 17:42:37 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:33.700 17:42:37 -- rpc/rpc.sh@13 -- # jq length 00:05:33.700 17:42:37 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:33.700 17:42:37 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:33.700 17:42:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:33.700 17:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:33.700 17:42:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:33.700 17:42:37 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:33.700 17:42:37 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:33.700 17:42:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:33.700 17:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:33.700 17:42:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:33.700 17:42:37 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:33.700 { 00:05:33.700 "name": "Malloc2", 00:05:33.700 "aliases": [ 00:05:33.700 "ee988d3f-1634-4fa4-bd77-b0d7c75f99e3" 00:05:33.700 ], 00:05:33.700 "product_name": "Malloc disk", 00:05:33.700 "block_size": 512, 00:05:33.700 "num_blocks": 16384, 00:05:33.700 "uuid": "ee988d3f-1634-4fa4-bd77-b0d7c75f99e3", 00:05:33.700 "assigned_rate_limits": { 00:05:33.700 "rw_ios_per_sec": 0, 00:05:33.700 "rw_mbytes_per_sec": 0, 00:05:33.700 "r_mbytes_per_sec": 0, 00:05:33.700 "w_mbytes_per_sec": 0 00:05:33.700 }, 00:05:33.700 "claimed": false, 00:05:33.700 "zoned": false, 00:05:33.700 "supported_io_types": { 00:05:33.700 "read": true, 00:05:33.700 "write": true, 00:05:33.700 "unmap": true, 00:05:33.700 "write_zeroes": true, 00:05:33.700 "flush": true, 00:05:33.700 "reset": true, 00:05:33.700 "compare": false, 00:05:33.700 "compare_and_write": false, 00:05:33.700 "abort": true, 00:05:33.700 "nvme_admin": false, 00:05:33.700 "nvme_io": false 00:05:33.700 }, 00:05:33.700 "memory_domains": [ 00:05:33.700 { 00:05:33.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.700 "dma_device_type": 2 00:05:33.700 } 00:05:33.700 ], 00:05:33.700 "driver_specific": {} 00:05:33.700 } 00:05:33.700 ]' 00:05:33.700 17:42:37 -- rpc/rpc.sh@17 -- # jq length 00:05:33.960 17:42:38 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:33.960 17:42:38 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:33.960 17:42:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:33.960 17:42:38 -- common/autotest_common.sh@10 -- # set +x 00:05:33.960 [2024-07-22 17:42:38.022541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:33.960 [2024-07-22 
17:42:38.022570] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:33.960 [2024-07-22 17:42:38.022583] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb0c750 00:05:33.960 [2024-07-22 17:42:38.022589] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:33.960 [2024-07-22 17:42:38.023710] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:33.960 [2024-07-22 17:42:38.023729] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:33.960 Passthru0 00:05:33.960 17:42:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:33.960 17:42:38 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:33.960 17:42:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:33.960 17:42:38 -- common/autotest_common.sh@10 -- # set +x 00:05:33.960 17:42:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:33.960 17:42:38 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:33.960 { 00:05:33.960 "name": "Malloc2", 00:05:33.960 "aliases": [ 00:05:33.960 "ee988d3f-1634-4fa4-bd77-b0d7c75f99e3" 00:05:33.960 ], 00:05:33.960 "product_name": "Malloc disk", 00:05:33.960 "block_size": 512, 00:05:33.960 "num_blocks": 16384, 00:05:33.960 "uuid": "ee988d3f-1634-4fa4-bd77-b0d7c75f99e3", 00:05:33.960 "assigned_rate_limits": { 00:05:33.960 "rw_ios_per_sec": 0, 00:05:33.960 "rw_mbytes_per_sec": 0, 00:05:33.960 "r_mbytes_per_sec": 0, 00:05:33.960 "w_mbytes_per_sec": 0 00:05:33.960 }, 00:05:33.960 "claimed": true, 00:05:33.960 "claim_type": "exclusive_write", 00:05:33.960 "zoned": false, 00:05:33.960 "supported_io_types": { 00:05:33.960 "read": true, 00:05:33.960 "write": true, 00:05:33.960 "unmap": true, 00:05:33.960 "write_zeroes": true, 00:05:33.960 "flush": true, 00:05:33.960 "reset": true, 00:05:33.960 "compare": false, 00:05:33.960 "compare_and_write": false, 00:05:33.961 "abort": true, 00:05:33.961 "nvme_admin": false, 00:05:33.961 "nvme_io": false 00:05:33.961 }, 00:05:33.961 "memory_domains": [ 00:05:33.961 { 00:05:33.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.961 "dma_device_type": 2 00:05:33.961 } 00:05:33.961 ], 00:05:33.961 "driver_specific": {} 00:05:33.961 }, 00:05:33.961 { 00:05:33.961 "name": "Passthru0", 00:05:33.961 "aliases": [ 00:05:33.961 "78dbb91f-fd58-5edc-af41-4e54b4fbfe4c" 00:05:33.961 ], 00:05:33.961 "product_name": "passthru", 00:05:33.961 "block_size": 512, 00:05:33.961 "num_blocks": 16384, 00:05:33.961 "uuid": "78dbb91f-fd58-5edc-af41-4e54b4fbfe4c", 00:05:33.961 "assigned_rate_limits": { 00:05:33.961 "rw_ios_per_sec": 0, 00:05:33.961 "rw_mbytes_per_sec": 0, 00:05:33.961 "r_mbytes_per_sec": 0, 00:05:33.961 "w_mbytes_per_sec": 0 00:05:33.961 }, 00:05:33.961 "claimed": false, 00:05:33.961 "zoned": false, 00:05:33.961 "supported_io_types": { 00:05:33.961 "read": true, 00:05:33.961 "write": true, 00:05:33.961 "unmap": true, 00:05:33.961 "write_zeroes": true, 00:05:33.961 "flush": true, 00:05:33.961 "reset": true, 00:05:33.961 "compare": false, 00:05:33.961 "compare_and_write": false, 00:05:33.961 "abort": true, 00:05:33.961 "nvme_admin": false, 00:05:33.961 "nvme_io": false 00:05:33.961 }, 00:05:33.961 "memory_domains": [ 00:05:33.961 { 00:05:33.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.961 "dma_device_type": 2 00:05:33.961 } 00:05:33.961 ], 00:05:33.961 "driver_specific": { 00:05:33.961 "passthru": { 00:05:33.961 "name": "Passthru0", 00:05:33.961 "base_bdev_name": "Malloc2" 00:05:33.961 } 00:05:33.961 } 00:05:33.961 } 
00:05:33.961 ]' 00:05:33.961 17:42:38 -- rpc/rpc.sh@21 -- # jq length 00:05:33.961 17:42:38 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:33.961 17:42:38 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:33.961 17:42:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:33.961 17:42:38 -- common/autotest_common.sh@10 -- # set +x 00:05:33.961 17:42:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:33.961 17:42:38 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:33.961 17:42:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:33.961 17:42:38 -- common/autotest_common.sh@10 -- # set +x 00:05:33.961 17:42:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:33.961 17:42:38 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:33.961 17:42:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:33.961 17:42:38 -- common/autotest_common.sh@10 -- # set +x 00:05:33.961 17:42:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:33.961 17:42:38 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:33.961 17:42:38 -- rpc/rpc.sh@26 -- # jq length 00:05:33.961 17:42:38 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:33.961 00:05:33.961 real 0m0.275s 00:05:33.961 user 0m0.165s 00:05:33.961 sys 0m0.041s 00:05:33.961 17:42:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.961 17:42:38 -- common/autotest_common.sh@10 -- # set +x 00:05:33.961 ************************************ 00:05:33.961 END TEST rpc_daemon_integrity 00:05:33.961 ************************************ 00:05:33.961 17:42:38 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:33.961 17:42:38 -- rpc/rpc.sh@84 -- # killprocess 1480285 00:05:33.961 17:42:38 -- common/autotest_common.sh@926 -- # '[' -z 1480285 ']' 00:05:33.961 17:42:38 -- common/autotest_common.sh@930 -- # kill -0 1480285 00:05:33.961 17:42:38 -- common/autotest_common.sh@931 -- # uname 00:05:33.961 17:42:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:33.961 17:42:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1480285 00:05:34.221 17:42:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:34.221 17:42:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:34.221 17:42:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1480285' 00:05:34.221 killing process with pid 1480285 00:05:34.221 17:42:38 -- common/autotest_common.sh@945 -- # kill 1480285 00:05:34.221 17:42:38 -- common/autotest_common.sh@950 -- # wait 1480285 00:05:34.221 00:05:34.221 real 0m2.357s 00:05:34.221 user 0m3.130s 00:05:34.221 sys 0m0.613s 00:05:34.221 17:42:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.221 17:42:38 -- common/autotest_common.sh@10 -- # set +x 00:05:34.221 ************************************ 00:05:34.221 END TEST rpc 00:05:34.221 ************************************ 00:05:34.221 17:42:38 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:34.221 17:42:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:34.221 17:42:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:34.221 17:42:38 -- common/autotest_common.sh@10 -- # set +x 00:05:34.221 ************************************ 00:05:34.221 START TEST rpc_client 00:05:34.221 ************************************ 00:05:34.222 17:42:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 
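The rpc suite that finished just above (rpc_integrity, rpc_plugins, rpc_trace_cmd_test, rpc_daemon_integrity) drives spdk_tgt purely through scripts/rpc.py. A minimal sketch of the bdev sequence rpc_integrity exercises, runnable by hand against a target started the same way (this assumes the default /var/tmp/spdk.sock RPC socket; Malloc0 and Passthru0 are the names the test itself uses):

  # Sketch only - replays the RPC calls visible in the rpc_integrity trace above.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 8 512                        # creates Malloc0 (8 MiB, 512-byte blocks)
  $rpc bdev_passthru_create -b Malloc0 -p Passthru0    # passthru bdev claims Malloc0
  $rpc bdev_get_bdevs | jq length                      # expect 2 entries: Malloc0 + Passthru0
  $rpc bdev_passthru_delete Passthru0
  $rpc bdev_malloc_delete Malloc0
  $rpc bdev_get_bdevs | jq length                      # expect 0 again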
00:05:34.483 * Looking for test storage... 00:05:34.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:34.483 17:42:38 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:34.483 OK 00:05:34.483 17:42:38 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:34.483 00:05:34.483 real 0m0.119s 00:05:34.483 user 0m0.055s 00:05:34.483 sys 0m0.071s 00:05:34.483 17:42:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.483 17:42:38 -- common/autotest_common.sh@10 -- # set +x 00:05:34.483 ************************************ 00:05:34.483 END TEST rpc_client 00:05:34.483 ************************************ 00:05:34.483 17:42:38 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:34.483 17:42:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:34.483 17:42:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:34.483 17:42:38 -- common/autotest_common.sh@10 -- # set +x 00:05:34.483 ************************************ 00:05:34.483 START TEST json_config 00:05:34.483 ************************************ 00:05:34.483 17:42:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:34.483 17:42:38 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:34.483 17:42:38 -- nvmf/common.sh@7 -- # uname -s 00:05:34.483 17:42:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:34.483 17:42:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:34.483 17:42:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:34.483 17:42:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:34.483 17:42:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:34.483 17:42:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:34.483 17:42:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:34.483 17:42:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:34.483 17:42:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:34.483 17:42:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:34.483 17:42:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:05:34.483 17:42:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:05:34.483 17:42:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:34.483 17:42:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:34.483 17:42:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:34.483 17:42:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:34.483 17:42:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:34.483 17:42:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:34.483 17:42:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:34.483 17:42:38 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.483 17:42:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.483 17:42:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.483 17:42:38 -- paths/export.sh@5 -- # export PATH 00:05:34.483 17:42:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.483 17:42:38 -- nvmf/common.sh@46 -- # : 0 00:05:34.483 17:42:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:34.483 17:42:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:34.483 17:42:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:34.483 17:42:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:34.483 17:42:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:34.483 17:42:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:34.483 17:42:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:34.483 17:42:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:34.483 17:42:38 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:34.483 17:42:38 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:34.483 17:42:38 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:34.483 17:42:38 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:34.483 17:42:38 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:34.483 17:42:38 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:34.483 17:42:38 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:34.483 17:42:38 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:34.483 17:42:38 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:34.483 17:42:38 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:34.483 17:42:38 -- json_config/json_config.sh@33 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:34.483 17:42:38 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:34.483 17:42:38 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:34.483 17:42:38 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:34.483 17:42:38 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:34.483 INFO: JSON configuration test init 00:05:34.483 17:42:38 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:34.483 17:42:38 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:34.483 17:42:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:34.483 17:42:38 -- common/autotest_common.sh@10 -- # set +x 00:05:34.745 17:42:38 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:34.745 17:42:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:34.745 17:42:38 -- common/autotest_common.sh@10 -- # set +x 00:05:34.745 17:42:38 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:34.745 17:42:38 -- json_config/json_config.sh@98 -- # local app=target 00:05:34.745 17:42:38 -- json_config/json_config.sh@99 -- # shift 00:05:34.745 17:42:38 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:34.745 17:42:38 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:34.745 17:42:38 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:34.745 17:42:38 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:34.745 17:42:38 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:34.745 17:42:38 -- json_config/json_config.sh@111 -- # app_pid[$app]=1480940 00:05:34.745 17:42:38 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:34.745 Waiting for target to run... 00:05:34.745 17:42:38 -- json_config/json_config.sh@114 -- # waitforlisten 1480940 /var/tmp/spdk_tgt.sock 00:05:34.745 17:42:38 -- common/autotest_common.sh@819 -- # '[' -z 1480940 ']' 00:05:34.745 17:42:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:34.745 17:42:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:34.745 17:42:38 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:34.745 17:42:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:34.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:34.745 17:42:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:34.745 17:42:38 -- common/autotest_common.sh@10 -- # set +x 00:05:34.745 [2024-07-22 17:42:38.817991] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:34.745 [2024-07-22 17:42:38.818070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1480940 ] 00:05:34.745 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.006 [2024-07-22 17:42:39.094178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.006 [2024-07-22 17:42:39.146116] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:35.006 [2024-07-22 17:42:39.146255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.577 17:42:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:35.577 17:42:39 -- common/autotest_common.sh@852 -- # return 0 00:05:35.577 17:42:39 -- json_config/json_config.sh@115 -- # echo '' 00:05:35.577 00:05:35.577 17:42:39 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:35.577 17:42:39 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:35.577 17:42:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:35.577 17:42:39 -- common/autotest_common.sh@10 -- # set +x 00:05:35.577 17:42:39 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:35.577 17:42:39 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:35.577 17:42:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:35.577 17:42:39 -- common/autotest_common.sh@10 -- # set +x 00:05:35.577 17:42:39 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:35.577 17:42:39 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:35.577 17:42:39 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:38.933 17:42:42 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:38.933 17:42:42 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:38.933 17:42:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:38.933 17:42:42 -- common/autotest_common.sh@10 -- # set +x 00:05:38.933 17:42:42 -- json_config/json_config.sh@48 -- # local ret=0 00:05:38.933 17:42:42 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:38.933 17:42:42 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:38.933 17:42:42 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:38.933 17:42:42 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:38.933 17:42:42 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:38.933 17:42:42 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:38.933 17:42:42 -- json_config/json_config.sh@51 -- # local get_types 00:05:38.933 17:42:42 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:38.933 17:42:42 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:38.933 17:42:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:38.933 17:42:42 -- common/autotest_common.sh@10 -- # set +x 00:05:38.933 17:42:43 -- json_config/json_config.sh@58 -- # return 0 00:05:38.933 17:42:43 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:38.933 17:42:43 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:38.933 17:42:43 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:38.933 17:42:43 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:38.934 17:42:43 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:38.934 17:42:43 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:38.934 17:42:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:38.934 17:42:43 -- common/autotest_common.sh@10 -- # set +x 00:05:38.934 17:42:43 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:38.934 17:42:43 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:38.934 17:42:43 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:38.934 17:42:43 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:38.934 17:42:43 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:38.934 MallocForNvmf0 00:05:39.194 17:42:43 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:39.194 17:42:43 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:39.194 MallocForNvmf1 00:05:39.194 17:42:43 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:39.194 17:42:43 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:39.454 [2024-07-22 17:42:43.570317] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:39.454 17:42:43 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:39.454 17:42:43 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:39.714 17:42:43 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:39.714 17:42:43 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:39.714 17:42:43 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:39.714 17:42:43 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:39.974 17:42:44 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:39.974 17:42:44 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:40.234 [2024-07-22 17:42:44.300725] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 
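The create_nvmf_subsystem_config step above boils down to the tgt_rpc calls visible in this log, collected here in one place (a sketch only, reusing the same rpc.py path, socket, bdev names and NQN the test passes):

  # Sketch of the nvmf subsystem setup exercised by json_config above.
  rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  $rpc nvmf_create_transport -t tcp -u 8192 -c 0        # logs '*** TCP Transport Init ***'
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420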
00:05:40.234 17:42:44 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:40.234 17:42:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:40.234 17:42:44 -- common/autotest_common.sh@10 -- # set +x 00:05:40.234 17:42:44 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:40.234 17:42:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:40.234 17:42:44 -- common/autotest_common.sh@10 -- # set +x 00:05:40.234 17:42:44 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:40.234 17:42:44 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:40.234 17:42:44 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:40.493 MallocBdevForConfigChangeCheck 00:05:40.494 17:42:44 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:40.494 17:42:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:40.494 17:42:44 -- common/autotest_common.sh@10 -- # set +x 00:05:40.494 17:42:44 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:40.494 17:42:44 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:40.753 17:42:44 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:40.753 INFO: shutting down applications... 00:05:40.753 17:42:44 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:40.753 17:42:44 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:40.753 17:42:44 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:40.753 17:42:44 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:43.295 Calling clear_iscsi_subsystem 00:05:43.295 Calling clear_nvmf_subsystem 00:05:43.295 Calling clear_nbd_subsystem 00:05:43.295 Calling clear_ublk_subsystem 00:05:43.295 Calling clear_vhost_blk_subsystem 00:05:43.295 Calling clear_vhost_scsi_subsystem 00:05:43.295 Calling clear_scheduler_subsystem 00:05:43.295 Calling clear_bdev_subsystem 00:05:43.295 Calling clear_accel_subsystem 00:05:43.295 Calling clear_vmd_subsystem 00:05:43.295 Calling clear_sock_subsystem 00:05:43.295 Calling clear_iobuf_subsystem 00:05:43.295 17:42:47 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:43.295 17:42:47 -- json_config/json_config.sh@396 -- # count=100 00:05:43.295 17:42:47 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:43.295 17:42:47 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:43.295 17:42:47 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:43.295 17:42:47 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:43.555 17:42:47 -- json_config/json_config.sh@398 -- # break 00:05:43.555 17:42:47 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:43.555 17:42:47 -- json_config/json_config.sh@432 -- # 
json_config_test_shutdown_app target 00:05:43.555 17:42:47 -- json_config/json_config.sh@120 -- # local app=target 00:05:43.555 17:42:47 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:43.555 17:42:47 -- json_config/json_config.sh@124 -- # [[ -n 1480940 ]] 00:05:43.556 17:42:47 -- json_config/json_config.sh@127 -- # kill -SIGINT 1480940 00:05:43.556 17:42:47 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:43.556 17:42:47 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:43.556 17:42:47 -- json_config/json_config.sh@130 -- # kill -0 1480940 00:05:43.556 17:42:47 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:44.126 17:42:48 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:44.126 17:42:48 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:44.126 17:42:48 -- json_config/json_config.sh@130 -- # kill -0 1480940 00:05:44.126 17:42:48 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:44.126 17:42:48 -- json_config/json_config.sh@132 -- # break 00:05:44.126 17:42:48 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:44.126 17:42:48 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:44.126 SPDK target shutdown done 00:05:44.126 17:42:48 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:44.126 INFO: relaunching applications... 00:05:44.126 17:42:48 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:44.126 17:42:48 -- json_config/json_config.sh@98 -- # local app=target 00:05:44.126 17:42:48 -- json_config/json_config.sh@99 -- # shift 00:05:44.126 17:42:48 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:44.126 17:42:48 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:44.126 17:42:48 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:44.126 17:42:48 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:44.126 17:42:48 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:44.126 17:42:48 -- json_config/json_config.sh@111 -- # app_pid[$app]=1482619 00:05:44.126 17:42:48 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:44.126 Waiting for target to run... 00:05:44.126 17:42:48 -- json_config/json_config.sh@114 -- # waitforlisten 1482619 /var/tmp/spdk_tgt.sock 00:05:44.126 17:42:48 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:44.126 17:42:48 -- common/autotest_common.sh@819 -- # '[' -z 1482619 ']' 00:05:44.126 17:42:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:44.127 17:42:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:44.127 17:42:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:44.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:44.127 17:42:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:44.127 17:42:48 -- common/autotest_common.sh@10 -- # set +x 00:05:44.127 [2024-07-22 17:42:48.316373] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:44.127 [2024-07-22 17:42:48.316424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1482619 ] 00:05:44.127 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.388 [2024-07-22 17:42:48.647118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.648 [2024-07-22 17:42:48.706978] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:44.648 [2024-07-22 17:42:48.707109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.943 [2024-07-22 17:42:51.716646] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:47.943 [2024-07-22 17:42:51.749116] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:47.943 17:42:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:47.943 17:42:51 -- common/autotest_common.sh@852 -- # return 0 00:05:47.943 17:42:51 -- json_config/json_config.sh@115 -- # echo '' 00:05:47.943 00:05:47.943 17:42:51 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:47.943 17:42:51 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:47.943 INFO: Checking if target configuration is the same... 00:05:47.943 17:42:51 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:47.943 17:42:51 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:47.943 17:42:51 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:47.943 + '[' 2 -ne 2 ']' 00:05:47.943 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:47.943 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:47.943 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:47.943 +++ basename /dev/fd/62 00:05:47.943 ++ mktemp /tmp/62.XXX 00:05:47.943 + tmp_file_1=/tmp/62.ke7 00:05:47.943 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:47.943 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:47.943 + tmp_file_2=/tmp/spdk_tgt_config.json.6z2 00:05:47.943 + ret=0 00:05:47.943 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:47.943 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:47.944 + diff -u /tmp/62.ke7 /tmp/spdk_tgt_config.json.6z2 00:05:47.944 + echo 'INFO: JSON config files are the same' 00:05:47.944 INFO: JSON config files are the same 00:05:47.944 + rm /tmp/62.ke7 /tmp/spdk_tgt_config.json.6z2 00:05:47.944 + exit 0 00:05:47.944 17:42:52 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:47.944 17:42:52 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:47.944 INFO: changing configuration and checking if this can be detected... 
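The "Checking if target configuration is the same" step above is a normalized diff: json_diff.sh pulls save_config from the running target, sorts both sides with config_filter.py, and compares. Roughly (a sketch; /tmp/running.json and /tmp/saved.json are placeholder names, the script itself uses mktemp files such as /tmp/62.ke7):

  # Sketch of the comparison json_diff.sh performs above.
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | $spdk/test/json_config/config_filter.py -method sort > /tmp/running.json
  $spdk/test/json_config/config_filter.py -method sort \
    < $spdk/spdk_tgt_config.json > /tmp/saved.json
  diff -u /tmp/running.json /tmp/saved.json \
    && echo 'INFO: JSON config files are the same'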
00:05:47.944 17:42:52 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:47.944 17:42:52 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:48.204 17:42:52 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:48.204 17:42:52 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:48.204 17:42:52 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:48.204 + '[' 2 -ne 2 ']' 00:05:48.204 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:48.204 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:48.204 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:48.204 +++ basename /dev/fd/62 00:05:48.204 ++ mktemp /tmp/62.XXX 00:05:48.204 + tmp_file_1=/tmp/62.WzW 00:05:48.204 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:48.204 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:48.204 + tmp_file_2=/tmp/spdk_tgt_config.json.cUl 00:05:48.204 + ret=0 00:05:48.204 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:48.465 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:48.465 + diff -u /tmp/62.WzW /tmp/spdk_tgt_config.json.cUl 00:05:48.465 + ret=1 00:05:48.465 + echo '=== Start of file: /tmp/62.WzW ===' 00:05:48.465 + cat /tmp/62.WzW 00:05:48.465 + echo '=== End of file: /tmp/62.WzW ===' 00:05:48.465 + echo '' 00:05:48.465 + echo '=== Start of file: /tmp/spdk_tgt_config.json.cUl ===' 00:05:48.465 + cat /tmp/spdk_tgt_config.json.cUl 00:05:48.465 + echo '=== End of file: /tmp/spdk_tgt_config.json.cUl ===' 00:05:48.465 + echo '' 00:05:48.465 + rm /tmp/62.WzW /tmp/spdk_tgt_config.json.cUl 00:05:48.465 + exit 1 00:05:48.465 17:42:52 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:48.465 INFO: configuration change detected. 
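The change-detection half above works the same way: the MallocBdevForConfigChangeCheck canary is deleted over RPC, the normalized diff is re-run, and the non-empty diff (diff exits 1, so json_diff.sh returns ret=1) is what yields "configuration change detected". A sketch, using the same placeholder file names as the previous snippet:

  # Sketch of the change-detection step exercised above.
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  $spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | $spdk/test/json_config/config_filter.py -method sort > /tmp/running.json
  diff -u /tmp/running.json /tmp/saved.json \
    || echo 'INFO: configuration change detected.'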
00:05:48.465 17:42:52 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:48.465 17:42:52 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:48.465 17:42:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:48.465 17:42:52 -- common/autotest_common.sh@10 -- # set +x 00:05:48.725 17:42:52 -- json_config/json_config.sh@360 -- # local ret=0 00:05:48.725 17:42:52 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:48.725 17:42:52 -- json_config/json_config.sh@370 -- # [[ -n 1482619 ]] 00:05:48.725 17:42:52 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:48.725 17:42:52 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:48.725 17:42:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:48.725 17:42:52 -- common/autotest_common.sh@10 -- # set +x 00:05:48.725 17:42:52 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:48.725 17:42:52 -- json_config/json_config.sh@246 -- # uname -s 00:05:48.725 17:42:52 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:48.725 17:42:52 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:48.725 17:42:52 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:48.725 17:42:52 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:48.725 17:42:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:48.725 17:42:52 -- common/autotest_common.sh@10 -- # set +x 00:05:48.725 17:42:52 -- json_config/json_config.sh@376 -- # killprocess 1482619 00:05:48.725 17:42:52 -- common/autotest_common.sh@926 -- # '[' -z 1482619 ']' 00:05:48.725 17:42:52 -- common/autotest_common.sh@930 -- # kill -0 1482619 00:05:48.725 17:42:52 -- common/autotest_common.sh@931 -- # uname 00:05:48.725 17:42:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:48.725 17:42:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1482619 00:05:48.725 17:42:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:48.725 17:42:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:48.725 17:42:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1482619' 00:05:48.725 killing process with pid 1482619 00:05:48.725 17:42:52 -- common/autotest_common.sh@945 -- # kill 1482619 00:05:48.725 17:42:52 -- common/autotest_common.sh@950 -- # wait 1482619 00:05:51.268 17:42:55 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:51.268 17:42:55 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:51.268 17:42:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:51.268 17:42:55 -- common/autotest_common.sh@10 -- # set +x 00:05:51.268 17:42:55 -- json_config/json_config.sh@381 -- # return 0 00:05:51.268 17:42:55 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:51.268 INFO: Success 00:05:51.268 00:05:51.268 real 0m16.527s 00:05:51.268 user 0m17.889s 00:05:51.268 sys 0m1.882s 00:05:51.268 17:42:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.268 17:42:55 -- common/autotest_common.sh@10 -- # set +x 00:05:51.268 ************************************ 00:05:51.268 END TEST json_config 00:05:51.268 ************************************ 00:05:51.268 17:42:55 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:51.268 17:42:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:51.268 17:42:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:51.268 17:42:55 -- common/autotest_common.sh@10 -- # set +x 00:05:51.268 ************************************ 00:05:51.268 START TEST json_config_extra_key 00:05:51.268 ************************************ 00:05:51.268 17:42:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:51.268 17:42:55 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:51.268 17:42:55 -- nvmf/common.sh@7 -- # uname -s 00:05:51.268 17:42:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:51.268 17:42:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:51.268 17:42:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:51.268 17:42:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:51.268 17:42:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:51.268 17:42:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:51.268 17:42:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:51.268 17:42:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:51.268 17:42:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:51.268 17:42:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:51.268 17:42:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:05:51.268 17:42:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:05:51.268 17:42:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:51.268 17:42:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:51.268 17:42:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:51.268 17:42:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:51.268 17:42:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.268 17:42:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.268 17:42:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.268 17:42:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.268 17:42:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.268 17:42:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.268 17:42:55 -- paths/export.sh@5 -- # export PATH 00:05:51.268 17:42:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.268 17:42:55 -- nvmf/common.sh@46 -- # : 0 00:05:51.268 17:42:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:51.268 17:42:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:51.268 17:42:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:51.268 17:42:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:51.268 17:42:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:51.268 17:42:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:51.268 17:42:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:51.268 17:42:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:51.268 17:42:55 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:51.268 17:42:55 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:51.268 17:42:55 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:51.268 17:42:55 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:51.268 17:42:55 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:51.268 17:42:55 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:51.268 17:42:55 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:51.268 17:42:55 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:51.268 17:42:55 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:51.268 17:42:55 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:51.268 INFO: launching applications... 00:05:51.268 17:42:55 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:51.268 17:42:55 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:51.268 17:42:55 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:51.268 17:42:55 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:51.268 17:42:55 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:51.268 17:42:55 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=1483975 00:05:51.268 17:42:55 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:51.268 Waiting for target to run... 
00:05:51.268 17:42:55 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 1483975 /var/tmp/spdk_tgt.sock 00:05:51.268 17:42:55 -- common/autotest_common.sh@819 -- # '[' -z 1483975 ']' 00:05:51.268 17:42:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:51.268 17:42:55 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:51.268 17:42:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:51.268 17:42:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:51.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:51.268 17:42:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:51.268 17:42:55 -- common/autotest_common.sh@10 -- # set +x 00:05:51.268 [2024-07-22 17:42:55.377627] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:51.268 [2024-07-22 17:42:55.377691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1483975 ] 00:05:51.268 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.529 [2024-07-22 17:42:55.615110] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.529 [2024-07-22 17:42:55.664103] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:51.529 [2024-07-22 17:42:55.664228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.100 17:42:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:52.100 17:42:56 -- common/autotest_common.sh@852 -- # return 0 00:05:52.100 17:42:56 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:52.100 00:05:52.100 17:42:56 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:52.100 INFO: shutting down applications... 
00:05:52.100 17:42:56 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:52.100 17:42:56 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:52.100 17:42:56 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:52.100 17:42:56 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 1483975 ]] 00:05:52.100 17:42:56 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 1483975 00:05:52.100 17:42:56 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:52.100 17:42:56 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:52.100 17:42:56 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1483975 00:05:52.100 17:42:56 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:52.671 17:42:56 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:52.671 17:42:56 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:52.671 17:42:56 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1483975 00:05:52.671 17:42:56 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:52.671 17:42:56 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:52.671 17:42:56 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:52.671 17:42:56 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:52.671 SPDK target shutdown done 00:05:52.671 17:42:56 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:52.671 Success 00:05:52.671 00:05:52.671 real 0m1.479s 00:05:52.671 user 0m1.210s 00:05:52.671 sys 0m0.332s 00:05:52.671 17:42:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.671 17:42:56 -- common/autotest_common.sh@10 -- # set +x 00:05:52.671 ************************************ 00:05:52.671 END TEST json_config_extra_key 00:05:52.671 ************************************ 00:05:52.671 17:42:56 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:52.671 17:42:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:52.671 17:42:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.671 17:42:56 -- common/autotest_common.sh@10 -- # set +x 00:05:52.671 ************************************ 00:05:52.671 START TEST alias_rpc 00:05:52.671 ************************************ 00:05:52.671 17:42:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:52.671 * Looking for test storage... 00:05:52.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:52.671 17:42:56 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:52.671 17:42:56 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1484324 00:05:52.671 17:42:56 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1484324 00:05:52.672 17:42:56 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.672 17:42:56 -- common/autotest_common.sh@819 -- # '[' -z 1484324 ']' 00:05:52.672 17:42:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.672 17:42:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:52.672 17:42:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:52.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.672 17:42:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:52.672 17:42:56 -- common/autotest_common.sh@10 -- # set +x 00:05:52.672 [2024-07-22 17:42:56.898287] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:52.672 [2024-07-22 17:42:56.898353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1484324 ] 00:05:52.672 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.933 [2024-07-22 17:42:56.978316] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.933 [2024-07-22 17:42:57.040033] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:52.933 [2024-07-22 17:42:57.040153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.504 17:42:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:53.504 17:42:57 -- common/autotest_common.sh@852 -- # return 0 00:05:53.504 17:42:57 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:53.765 17:42:57 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1484324 00:05:53.765 17:42:57 -- common/autotest_common.sh@926 -- # '[' -z 1484324 ']' 00:05:53.765 17:42:57 -- common/autotest_common.sh@930 -- # kill -0 1484324 00:05:53.765 17:42:57 -- common/autotest_common.sh@931 -- # uname 00:05:53.765 17:42:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:53.765 17:42:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1484324 00:05:53.765 17:42:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:53.765 17:42:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:53.765 17:42:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1484324' 00:05:53.765 killing process with pid 1484324 00:05:53.765 17:42:57 -- common/autotest_common.sh@945 -- # kill 1484324 00:05:53.765 17:42:57 -- common/autotest_common.sh@950 -- # wait 1484324 00:05:54.026 00:05:54.026 real 0m1.413s 00:05:54.026 user 0m1.619s 00:05:54.026 sys 0m0.365s 00:05:54.026 17:42:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.026 17:42:58 -- common/autotest_common.sh@10 -- # set +x 00:05:54.026 ************************************ 00:05:54.026 END TEST alias_rpc 00:05:54.026 ************************************ 00:05:54.026 17:42:58 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:05:54.026 17:42:58 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:54.026 17:42:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:54.026 17:42:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:54.026 17:42:58 -- common/autotest_common.sh@10 -- # set +x 00:05:54.026 ************************************ 00:05:54.026 START TEST spdkcli_tcp 00:05:54.026 ************************************ 00:05:54.026 17:42:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:54.026 * Looking for test storage... 
00:05:54.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:54.287 17:42:58 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:54.287 17:42:58 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:54.287 17:42:58 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:54.287 17:42:58 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:54.287 17:42:58 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:54.287 17:42:58 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:54.287 17:42:58 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:54.287 17:42:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:54.287 17:42:58 -- common/autotest_common.sh@10 -- # set +x 00:05:54.287 17:42:58 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1484682 00:05:54.287 17:42:58 -- spdkcli/tcp.sh@27 -- # waitforlisten 1484682 00:05:54.287 17:42:58 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:54.287 17:42:58 -- common/autotest_common.sh@819 -- # '[' -z 1484682 ']' 00:05:54.287 17:42:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.287 17:42:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:54.287 17:42:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.287 17:42:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:54.287 17:42:58 -- common/autotest_common.sh@10 -- # set +x 00:05:54.287 [2024-07-22 17:42:58.367196] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:54.287 [2024-07-22 17:42:58.367267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1484682 ] 00:05:54.287 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.287 [2024-07-22 17:42:58.453913] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.287 [2024-07-22 17:42:58.528750] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:54.287 [2024-07-22 17:42:58.528995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.287 [2024-07-22 17:42:58.528999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.229 17:42:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:55.229 17:42:59 -- common/autotest_common.sh@852 -- # return 0 00:05:55.229 17:42:59 -- spdkcli/tcp.sh@31 -- # socat_pid=1484701 00:05:55.229 17:42:59 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:55.229 17:42:59 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:55.229 [ 00:05:55.229 "bdev_malloc_delete", 00:05:55.229 "bdev_malloc_create", 00:05:55.229 "bdev_null_resize", 00:05:55.229 "bdev_null_delete", 00:05:55.230 "bdev_null_create", 00:05:55.230 "bdev_nvme_cuse_unregister", 00:05:55.230 "bdev_nvme_cuse_register", 00:05:55.230 "bdev_opal_new_user", 00:05:55.230 "bdev_opal_set_lock_state", 00:05:55.230 "bdev_opal_delete", 00:05:55.230 "bdev_opal_get_info", 00:05:55.230 "bdev_opal_create", 00:05:55.230 "bdev_nvme_opal_revert", 00:05:55.230 "bdev_nvme_opal_init", 00:05:55.230 "bdev_nvme_send_cmd", 00:05:55.230 "bdev_nvme_get_path_iostat", 00:05:55.230 "bdev_nvme_get_mdns_discovery_info", 00:05:55.230 "bdev_nvme_stop_mdns_discovery", 00:05:55.230 "bdev_nvme_start_mdns_discovery", 00:05:55.230 "bdev_nvme_set_multipath_policy", 00:05:55.230 "bdev_nvme_set_preferred_path", 00:05:55.230 "bdev_nvme_get_io_paths", 00:05:55.230 "bdev_nvme_remove_error_injection", 00:05:55.230 "bdev_nvme_add_error_injection", 00:05:55.230 "bdev_nvme_get_discovery_info", 00:05:55.230 "bdev_nvme_stop_discovery", 00:05:55.230 "bdev_nvme_start_discovery", 00:05:55.230 "bdev_nvme_get_controller_health_info", 00:05:55.230 "bdev_nvme_disable_controller", 00:05:55.230 "bdev_nvme_enable_controller", 00:05:55.230 "bdev_nvme_reset_controller", 00:05:55.230 "bdev_nvme_get_transport_statistics", 00:05:55.230 "bdev_nvme_apply_firmware", 00:05:55.230 "bdev_nvme_detach_controller", 00:05:55.230 "bdev_nvme_get_controllers", 00:05:55.230 "bdev_nvme_attach_controller", 00:05:55.230 "bdev_nvme_set_hotplug", 00:05:55.230 "bdev_nvme_set_options", 00:05:55.230 "bdev_passthru_delete", 00:05:55.230 "bdev_passthru_create", 00:05:55.230 "bdev_lvol_grow_lvstore", 00:05:55.230 "bdev_lvol_get_lvols", 00:05:55.230 "bdev_lvol_get_lvstores", 00:05:55.230 "bdev_lvol_delete", 00:05:55.230 "bdev_lvol_set_read_only", 00:05:55.230 "bdev_lvol_resize", 00:05:55.230 "bdev_lvol_decouple_parent", 00:05:55.230 "bdev_lvol_inflate", 00:05:55.230 "bdev_lvol_rename", 00:05:55.230 "bdev_lvol_clone_bdev", 00:05:55.230 "bdev_lvol_clone", 00:05:55.230 "bdev_lvol_snapshot", 00:05:55.230 "bdev_lvol_create", 00:05:55.230 "bdev_lvol_delete_lvstore", 00:05:55.230 "bdev_lvol_rename_lvstore", 00:05:55.230 "bdev_lvol_create_lvstore", 00:05:55.230 "bdev_raid_set_options", 00:05:55.230 
"bdev_raid_remove_base_bdev", 00:05:55.230 "bdev_raid_add_base_bdev", 00:05:55.230 "bdev_raid_delete", 00:05:55.230 "bdev_raid_create", 00:05:55.230 "bdev_raid_get_bdevs", 00:05:55.230 "bdev_error_inject_error", 00:05:55.230 "bdev_error_delete", 00:05:55.230 "bdev_error_create", 00:05:55.230 "bdev_split_delete", 00:05:55.230 "bdev_split_create", 00:05:55.230 "bdev_delay_delete", 00:05:55.230 "bdev_delay_create", 00:05:55.230 "bdev_delay_update_latency", 00:05:55.230 "bdev_zone_block_delete", 00:05:55.230 "bdev_zone_block_create", 00:05:55.230 "blobfs_create", 00:05:55.230 "blobfs_detect", 00:05:55.230 "blobfs_set_cache_size", 00:05:55.230 "bdev_aio_delete", 00:05:55.230 "bdev_aio_rescan", 00:05:55.230 "bdev_aio_create", 00:05:55.230 "bdev_ftl_set_property", 00:05:55.230 "bdev_ftl_get_properties", 00:05:55.230 "bdev_ftl_get_stats", 00:05:55.230 "bdev_ftl_unmap", 00:05:55.230 "bdev_ftl_unload", 00:05:55.230 "bdev_ftl_delete", 00:05:55.230 "bdev_ftl_load", 00:05:55.230 "bdev_ftl_create", 00:05:55.230 "bdev_virtio_attach_controller", 00:05:55.230 "bdev_virtio_scsi_get_devices", 00:05:55.230 "bdev_virtio_detach_controller", 00:05:55.230 "bdev_virtio_blk_set_hotplug", 00:05:55.230 "bdev_iscsi_delete", 00:05:55.230 "bdev_iscsi_create", 00:05:55.230 "bdev_iscsi_set_options", 00:05:55.230 "accel_error_inject_error", 00:05:55.230 "ioat_scan_accel_module", 00:05:55.230 "dsa_scan_accel_module", 00:05:55.230 "iaa_scan_accel_module", 00:05:55.230 "iscsi_set_options", 00:05:55.230 "iscsi_get_auth_groups", 00:05:55.230 "iscsi_auth_group_remove_secret", 00:05:55.230 "iscsi_auth_group_add_secret", 00:05:55.230 "iscsi_delete_auth_group", 00:05:55.230 "iscsi_create_auth_group", 00:05:55.230 "iscsi_set_discovery_auth", 00:05:55.230 "iscsi_get_options", 00:05:55.230 "iscsi_target_node_request_logout", 00:05:55.230 "iscsi_target_node_set_redirect", 00:05:55.230 "iscsi_target_node_set_auth", 00:05:55.230 "iscsi_target_node_add_lun", 00:05:55.230 "iscsi_get_connections", 00:05:55.230 "iscsi_portal_group_set_auth", 00:05:55.230 "iscsi_start_portal_group", 00:05:55.230 "iscsi_delete_portal_group", 00:05:55.230 "iscsi_create_portal_group", 00:05:55.230 "iscsi_get_portal_groups", 00:05:55.230 "iscsi_delete_target_node", 00:05:55.230 "iscsi_target_node_remove_pg_ig_maps", 00:05:55.230 "iscsi_target_node_add_pg_ig_maps", 00:05:55.230 "iscsi_create_target_node", 00:05:55.230 "iscsi_get_target_nodes", 00:05:55.230 "iscsi_delete_initiator_group", 00:05:55.230 "iscsi_initiator_group_remove_initiators", 00:05:55.230 "iscsi_initiator_group_add_initiators", 00:05:55.230 "iscsi_create_initiator_group", 00:05:55.230 "iscsi_get_initiator_groups", 00:05:55.230 "nvmf_set_crdt", 00:05:55.230 "nvmf_set_config", 00:05:55.230 "nvmf_set_max_subsystems", 00:05:55.230 "nvmf_subsystem_get_listeners", 00:05:55.230 "nvmf_subsystem_get_qpairs", 00:05:55.230 "nvmf_subsystem_get_controllers", 00:05:55.230 "nvmf_get_stats", 00:05:55.230 "nvmf_get_transports", 00:05:55.230 "nvmf_create_transport", 00:05:55.230 "nvmf_get_targets", 00:05:55.230 "nvmf_delete_target", 00:05:55.230 "nvmf_create_target", 00:05:55.230 "nvmf_subsystem_allow_any_host", 00:05:55.230 "nvmf_subsystem_remove_host", 00:05:55.230 "nvmf_subsystem_add_host", 00:05:55.230 "nvmf_subsystem_remove_ns", 00:05:55.230 "nvmf_subsystem_add_ns", 00:05:55.230 "nvmf_subsystem_listener_set_ana_state", 00:05:55.230 "nvmf_discovery_get_referrals", 00:05:55.230 "nvmf_discovery_remove_referral", 00:05:55.230 "nvmf_discovery_add_referral", 00:05:55.230 "nvmf_subsystem_remove_listener", 
00:05:55.230 "nvmf_subsystem_add_listener", 00:05:55.230 "nvmf_delete_subsystem", 00:05:55.230 "nvmf_create_subsystem", 00:05:55.230 "nvmf_get_subsystems", 00:05:55.230 "env_dpdk_get_mem_stats", 00:05:55.230 "nbd_get_disks", 00:05:55.230 "nbd_stop_disk", 00:05:55.230 "nbd_start_disk", 00:05:55.230 "ublk_recover_disk", 00:05:55.230 "ublk_get_disks", 00:05:55.230 "ublk_stop_disk", 00:05:55.230 "ublk_start_disk", 00:05:55.230 "ublk_destroy_target", 00:05:55.230 "ublk_create_target", 00:05:55.230 "virtio_blk_create_transport", 00:05:55.230 "virtio_blk_get_transports", 00:05:55.230 "vhost_controller_set_coalescing", 00:05:55.230 "vhost_get_controllers", 00:05:55.230 "vhost_delete_controller", 00:05:55.230 "vhost_create_blk_controller", 00:05:55.230 "vhost_scsi_controller_remove_target", 00:05:55.230 "vhost_scsi_controller_add_target", 00:05:55.230 "vhost_start_scsi_controller", 00:05:55.230 "vhost_create_scsi_controller", 00:05:55.230 "thread_set_cpumask", 00:05:55.230 "framework_get_scheduler", 00:05:55.230 "framework_set_scheduler", 00:05:55.230 "framework_get_reactors", 00:05:55.230 "thread_get_io_channels", 00:05:55.230 "thread_get_pollers", 00:05:55.230 "thread_get_stats", 00:05:55.230 "framework_monitor_context_switch", 00:05:55.230 "spdk_kill_instance", 00:05:55.230 "log_enable_timestamps", 00:05:55.230 "log_get_flags", 00:05:55.230 "log_clear_flag", 00:05:55.230 "log_set_flag", 00:05:55.230 "log_get_level", 00:05:55.230 "log_set_level", 00:05:55.230 "log_get_print_level", 00:05:55.230 "log_set_print_level", 00:05:55.230 "framework_enable_cpumask_locks", 00:05:55.230 "framework_disable_cpumask_locks", 00:05:55.230 "framework_wait_init", 00:05:55.230 "framework_start_init", 00:05:55.230 "scsi_get_devices", 00:05:55.230 "bdev_get_histogram", 00:05:55.230 "bdev_enable_histogram", 00:05:55.230 "bdev_set_qos_limit", 00:05:55.230 "bdev_set_qd_sampling_period", 00:05:55.230 "bdev_get_bdevs", 00:05:55.230 "bdev_reset_iostat", 00:05:55.230 "bdev_get_iostat", 00:05:55.230 "bdev_examine", 00:05:55.230 "bdev_wait_for_examine", 00:05:55.230 "bdev_set_options", 00:05:55.230 "notify_get_notifications", 00:05:55.230 "notify_get_types", 00:05:55.230 "accel_get_stats", 00:05:55.230 "accel_set_options", 00:05:55.230 "accel_set_driver", 00:05:55.230 "accel_crypto_key_destroy", 00:05:55.230 "accel_crypto_keys_get", 00:05:55.230 "accel_crypto_key_create", 00:05:55.230 "accel_assign_opc", 00:05:55.230 "accel_get_module_info", 00:05:55.231 "accel_get_opc_assignments", 00:05:55.231 "vmd_rescan", 00:05:55.231 "vmd_remove_device", 00:05:55.231 "vmd_enable", 00:05:55.231 "sock_set_default_impl", 00:05:55.231 "sock_impl_set_options", 00:05:55.231 "sock_impl_get_options", 00:05:55.231 "iobuf_get_stats", 00:05:55.231 "iobuf_set_options", 00:05:55.231 "framework_get_pci_devices", 00:05:55.231 "framework_get_config", 00:05:55.231 "framework_get_subsystems", 00:05:55.231 "trace_get_info", 00:05:55.231 "trace_get_tpoint_group_mask", 00:05:55.231 "trace_disable_tpoint_group", 00:05:55.231 "trace_enable_tpoint_group", 00:05:55.231 "trace_clear_tpoint_mask", 00:05:55.231 "trace_set_tpoint_mask", 00:05:55.231 "spdk_get_version", 00:05:55.231 "rpc_get_methods" 00:05:55.231 ] 00:05:55.231 17:42:59 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:55.231 17:42:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:55.231 17:42:59 -- common/autotest_common.sh@10 -- # set +x 00:05:55.231 17:42:59 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:55.231 17:42:59 -- spdkcli/tcp.sh@38 -- # killprocess 
1484682 00:05:55.231 17:42:59 -- common/autotest_common.sh@926 -- # '[' -z 1484682 ']' 00:05:55.231 17:42:59 -- common/autotest_common.sh@930 -- # kill -0 1484682 00:05:55.231 17:42:59 -- common/autotest_common.sh@931 -- # uname 00:05:55.231 17:42:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:55.231 17:42:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1484682 00:05:55.491 17:42:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:55.491 17:42:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:55.491 17:42:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1484682' 00:05:55.491 killing process with pid 1484682 00:05:55.491 17:42:59 -- common/autotest_common.sh@945 -- # kill 1484682 00:05:55.491 17:42:59 -- common/autotest_common.sh@950 -- # wait 1484682 00:05:55.491 00:05:55.491 real 0m1.505s 00:05:55.491 user 0m2.881s 00:05:55.491 sys 0m0.431s 00:05:55.491 17:42:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.491 17:42:59 -- common/autotest_common.sh@10 -- # set +x 00:05:55.491 ************************************ 00:05:55.491 END TEST spdkcli_tcp 00:05:55.491 ************************************ 00:05:55.491 17:42:59 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:55.491 17:42:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:55.491 17:42:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.491 17:42:59 -- common/autotest_common.sh@10 -- # set +x 00:05:55.491 ************************************ 00:05:55.491 START TEST dpdk_mem_utility 00:05:55.491 ************************************ 00:05:55.491 17:42:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:55.753 * Looking for test storage... 00:05:55.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:55.753 17:42:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:55.753 17:42:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1484962 00:05:55.753 17:42:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1484962 00:05:55.753 17:42:59 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.753 17:42:59 -- common/autotest_common.sh@819 -- # '[' -z 1484962 ']' 00:05:55.753 17:42:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.753 17:42:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:55.753 17:42:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.753 17:42:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:55.753 17:42:59 -- common/autotest_common.sh@10 -- # set +x 00:05:55.753 [2024-07-22 17:42:59.950168] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:55.753 [2024-07-22 17:42:59.950313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1484962 ] 00:05:56.014 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.014 [2024-07-22 17:43:00.087964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.014 [2024-07-22 17:43:00.160860] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:56.014 [2024-07-22 17:43:00.160991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.585 17:43:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:56.585 17:43:00 -- common/autotest_common.sh@852 -- # return 0 00:05:56.585 17:43:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:56.585 17:43:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:56.585 17:43:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:56.585 17:43:00 -- common/autotest_common.sh@10 -- # set +x 00:05:56.585 { 00:05:56.585 "filename": "/tmp/spdk_mem_dump.txt" 00:05:56.585 } 00:05:56.585 17:43:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:56.585 17:43:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:56.585 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:56.585 1 heaps totaling size 814.000000 MiB 00:05:56.585 size: 814.000000 MiB heap id: 0 00:05:56.585 end heaps---------- 00:05:56.585 8 mempools totaling size 598.116089 MiB 00:05:56.585 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:56.586 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:56.586 size: 84.521057 MiB name: bdev_io_1484962 00:05:56.586 size: 51.011292 MiB name: evtpool_1484962 00:05:56.586 size: 50.003479 MiB name: msgpool_1484962 00:05:56.586 size: 21.763794 MiB name: PDU_Pool 00:05:56.586 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:56.586 size: 0.026123 MiB name: Session_Pool 00:05:56.586 end mempools------- 00:05:56.586 6 memzones totaling size 4.142822 MiB 00:05:56.586 size: 1.000366 MiB name: RG_ring_0_1484962 00:05:56.586 size: 1.000366 MiB name: RG_ring_1_1484962 00:05:56.586 size: 1.000366 MiB name: RG_ring_4_1484962 00:05:56.586 size: 1.000366 MiB name: RG_ring_5_1484962 00:05:56.586 size: 0.125366 MiB name: RG_ring_2_1484962 00:05:56.586 size: 0.015991 MiB name: RG_ring_3_1484962 00:05:56.586 end memzones------- 00:05:56.586 17:43:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:56.586 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:56.586 list of free elements. 
size: 12.519348 MiB 00:05:56.586 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:56.586 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:56.586 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:56.586 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:56.586 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:56.586 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:56.586 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:56.586 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:56.586 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:56.586 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:56.586 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:56.586 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:56.586 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:56.586 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:56.586 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:56.586 list of standard malloc elements. size: 199.218079 MiB 00:05:56.586 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:56.586 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:56.586 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:56.586 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:56.586 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:56.586 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:56.586 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:56.586 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:56.586 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:56.586 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:56.586 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:56.586 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:56.586 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:56.586 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:56.586 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:56.586 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:56.586 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:56.586 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:56.586 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:56.586 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:56.586 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:56.586 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:56.586 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:56.586 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:56.586 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:56.586 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:56.586 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:56.586 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:56.586 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:56.586 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:56.586 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:56.586 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:56.586 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:56.586 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:56.586 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:56.586 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:56.586 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:56.586 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:56.586 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:56.586 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:56.586 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:56.586 list of memzone associated elements. size: 602.262573 MiB 00:05:56.586 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:56.586 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:56.586 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:56.586 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:56.586 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:56.586 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1484962_0 00:05:56.586 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:56.586 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1484962_0 00:05:56.586 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:56.586 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1484962_0 00:05:56.586 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:56.586 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:56.586 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:56.586 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:56.586 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:56.586 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1484962 00:05:56.586 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:56.586 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1484962 00:05:56.586 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:56.586 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1484962 00:05:56.586 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:56.586 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:56.586 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:56.586 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:56.586 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:56.586 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:56.586 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:56.586 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:56.586 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:56.586 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1484962 00:05:56.586 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:56.586 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1484962 00:05:56.586 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:56.586 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1484962 00:05:56.586 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:56.586 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1484962 00:05:56.586 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:56.586 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1484962 00:05:56.586 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:56.586 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:56.586 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:56.586 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:56.586 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:56.586 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:56.586 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:56.586 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1484962 00:05:56.586 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:56.587 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:56.587 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:56.587 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:56.587 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:56.587 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1484962 00:05:56.587 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:56.587 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:56.587 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:56.587 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1484962 00:05:56.587 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:56.587 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1484962 00:05:56.587 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:56.587 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:56.587 17:43:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:56.587 17:43:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1484962 00:05:56.587 17:43:00 -- common/autotest_common.sh@926 -- # '[' -z 1484962 ']' 00:05:56.587 17:43:00 -- common/autotest_common.sh@930 -- # kill -0 1484962 00:05:56.587 17:43:00 -- common/autotest_common.sh@931 -- # uname 00:05:56.587 17:43:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:56.587 17:43:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1484962 00:05:56.848 17:43:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:56.848 17:43:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:56.848 17:43:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1484962' 00:05:56.848 killing process with pid 1484962 00:05:56.848 17:43:00 -- common/autotest_common.sh@945 -- # kill 1484962 00:05:56.848 17:43:00 -- common/autotest_common.sh@950 -- # wait 1484962 00:05:56.848 00:05:56.848 real 0m1.333s 00:05:56.848 user 0m1.393s 00:05:56.848 sys 0m0.444s 00:05:56.848 17:43:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.848 17:43:01 -- common/autotest_common.sh@10 -- # set +x 00:05:56.848 ************************************ 00:05:56.848 END TEST dpdk_mem_utility 00:05:56.848 ************************************ 00:05:57.109 17:43:01 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:57.109 17:43:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:57.109 17:43:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.109 17:43:01 -- common/autotest_common.sh@10 -- # set +x 
00:05:57.109 ************************************ 00:05:57.109 START TEST event 00:05:57.109 ************************************ 00:05:57.109 17:43:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:57.109 * Looking for test storage... 00:05:57.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:57.109 17:43:01 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:57.109 17:43:01 -- bdev/nbd_common.sh@6 -- # set -e 00:05:57.109 17:43:01 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:57.109 17:43:01 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:57.109 17:43:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.109 17:43:01 -- common/autotest_common.sh@10 -- # set +x 00:05:57.109 ************************************ 00:05:57.109 START TEST event_perf 00:05:57.109 ************************************ 00:05:57.109 17:43:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:57.109 Running I/O for 1 seconds...[2024-07-22 17:43:01.267560] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:57.109 [2024-07-22 17:43:01.267683] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1485147 ] 00:05:57.109 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.109 [2024-07-22 17:43:01.358446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:57.369 [2024-07-22 17:43:01.435836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.369 [2024-07-22 17:43:01.435982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.369 [2024-07-22 17:43:01.436128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.369 [2024-07-22 17:43:01.436128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:58.310 Running I/O for 1 seconds... 00:05:58.310 lcore 0: 76024 00:05:58.310 lcore 1: 76027 00:05:58.310 lcore 2: 76031 00:05:58.310 lcore 3: 76028 00:05:58.310 done. 
00:05:58.310 00:05:58.310 real 0m1.241s 00:05:58.310 user 0m4.140s 00:05:58.310 sys 0m0.094s 00:05:58.310 17:43:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.310 17:43:02 -- common/autotest_common.sh@10 -- # set +x 00:05:58.310 ************************************ 00:05:58.310 END TEST event_perf 00:05:58.310 ************************************ 00:05:58.310 17:43:02 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:58.310 17:43:02 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:58.310 17:43:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.310 17:43:02 -- common/autotest_common.sh@10 -- # set +x 00:05:58.310 ************************************ 00:05:58.310 START TEST event_reactor 00:05:58.310 ************************************ 00:05:58.310 17:43:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:58.310 [2024-07-22 17:43:02.550999] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:58.310 [2024-07-22 17:43:02.551101] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1485451 ] 00:05:58.310 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.570 [2024-07-22 17:43:02.637856] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.570 [2024-07-22 17:43:02.700014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.511 test_start 00:05:59.511 oneshot 00:05:59.511 tick 100 00:05:59.511 tick 100 00:05:59.511 tick 250 00:05:59.511 tick 100 00:05:59.511 tick 100 00:05:59.511 tick 100 00:05:59.511 tick 250 00:05:59.511 tick 500 00:05:59.511 tick 100 00:05:59.511 tick 100 00:05:59.511 tick 250 00:05:59.511 tick 100 00:05:59.511 tick 100 00:05:59.511 test_end 00:05:59.511 00:05:59.511 real 0m1.219s 00:05:59.511 user 0m1.134s 00:05:59.511 sys 0m0.081s 00:05:59.511 17:43:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.511 17:43:03 -- common/autotest_common.sh@10 -- # set +x 00:05:59.511 ************************************ 00:05:59.511 END TEST event_reactor 00:05:59.511 ************************************ 00:05:59.511 17:43:03 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:59.511 17:43:03 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:59.511 17:43:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:59.511 17:43:03 -- common/autotest_common.sh@10 -- # set +x 00:05:59.772 ************************************ 00:05:59.772 START TEST event_reactor_perf 00:05:59.772 ************************************ 00:05:59.772 17:43:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:59.772 [2024-07-22 17:43:03.814157] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:59.772 [2024-07-22 17:43:03.814256] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1485769 ] 00:05:59.772 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.772 [2024-07-22 17:43:03.903761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.772 [2024-07-22 17:43:03.970075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.155 test_start 00:06:01.155 test_end 00:06:01.155 Performance: 398147 events per second 00:06:01.155 00:06:01.155 real 0m1.226s 00:06:01.155 user 0m1.127s 00:06:01.155 sys 0m0.095s 00:06:01.155 17:43:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.155 17:43:05 -- common/autotest_common.sh@10 -- # set +x 00:06:01.155 ************************************ 00:06:01.155 END TEST event_reactor_perf 00:06:01.155 ************************************ 00:06:01.155 17:43:05 -- event/event.sh@49 -- # uname -s 00:06:01.155 17:43:05 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:01.155 17:43:05 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:01.155 17:43:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:01.155 17:43:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:01.155 17:43:05 -- common/autotest_common.sh@10 -- # set +x 00:06:01.155 ************************************ 00:06:01.155 START TEST event_scheduler 00:06:01.155 ************************************ 00:06:01.155 17:43:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:01.155 * Looking for test storage... 00:06:01.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:01.155 17:43:05 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:01.155 17:43:05 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1485915 00:06:01.155 17:43:05 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.155 17:43:05 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:01.155 17:43:05 -- scheduler/scheduler.sh@37 -- # waitforlisten 1485915 00:06:01.155 17:43:05 -- common/autotest_common.sh@819 -- # '[' -z 1485915 ']' 00:06:01.156 17:43:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.156 17:43:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:01.156 17:43:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.156 17:43:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:01.156 17:43:05 -- common/autotest_common.sh@10 -- # set +x 00:06:01.156 [2024-07-22 17:43:05.211994] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:01.156 [2024-07-22 17:43:05.212071] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1485915 ] 00:06:01.156 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.156 [2024-07-22 17:43:05.344981] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:01.417 [2024-07-22 17:43:05.508312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.417 [2024-07-22 17:43:05.508451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.417 [2024-07-22 17:43:05.508505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:01.417 [2024-07-22 17:43:05.508514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.988 17:43:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:01.988 17:43:06 -- common/autotest_common.sh@852 -- # return 0 00:06:01.988 17:43:06 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:01.988 17:43:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:01.988 17:43:06 -- common/autotest_common.sh@10 -- # set +x 00:06:01.988 POWER: Env isn't set yet! 00:06:01.988 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:01.988 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:01.988 POWER: Cannot set governor of lcore 0 to userspace 00:06:01.988 POWER: Attempting to initialise PSTAT power management... 00:06:01.988 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:01.988 POWER: Initialized successfully for lcore 0 power management 00:06:01.988 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:01.988 POWER: Initialized successfully for lcore 1 power management 00:06:01.988 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:01.988 POWER: Initialized successfully for lcore 2 power management 00:06:01.988 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:01.988 POWER: Initialized successfully for lcore 3 power management 00:06:01.988 [2024-07-22 17:43:06.103811] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:01.988 [2024-07-22 17:43:06.103822] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:01.988 [2024-07-22 17:43:06.103828] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:01.988 17:43:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:01.988 17:43:06 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:01.988 17:43:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:01.988 17:43:06 -- common/autotest_common.sh@10 -- # set +x 00:06:01.988 [2024-07-22 17:43:06.158995] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:01.988 17:43:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:01.988 17:43:06 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:01.988 17:43:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:01.988 17:43:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:01.988 17:43:06 -- common/autotest_common.sh@10 -- # set +x 00:06:01.988 ************************************ 00:06:01.988 START TEST scheduler_create_thread 00:06:01.988 ************************************ 00:06:01.988 17:43:06 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:06:01.988 17:43:06 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:01.988 17:43:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:01.988 17:43:06 -- common/autotest_common.sh@10 -- # set +x 00:06:01.988 2 00:06:01.988 17:43:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:01.988 17:43:06 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:01.988 17:43:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:01.988 17:43:06 -- common/autotest_common.sh@10 -- # set +x 00:06:01.988 3 00:06:01.988 17:43:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:01.988 17:43:06 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:01.988 17:43:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:01.988 17:43:06 -- common/autotest_common.sh@10 -- # set +x 00:06:01.988 4 00:06:01.988 17:43:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:01.988 17:43:06 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:01.988 17:43:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:01.988 17:43:06 -- common/autotest_common.sh@10 -- # set +x 00:06:01.988 5 00:06:01.988 17:43:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:01.988 17:43:06 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:01.988 17:43:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:01.988 17:43:06 -- common/autotest_common.sh@10 -- # set +x 00:06:01.988 6 00:06:01.988 17:43:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:01.988 17:43:06 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:01.988 17:43:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:01.988 17:43:06 -- common/autotest_common.sh@10 -- # set +x 00:06:01.988 7 00:06:01.988 17:43:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:02.249 17:43:06 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:02.249 17:43:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:02.249 17:43:06 -- common/autotest_common.sh@10 -- # set +x 00:06:02.249 8 00:06:02.249 17:43:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:02.249 17:43:06 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:02.249 17:43:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:02.249 17:43:06 -- common/autotest_common.sh@10 -- # set +x 00:06:03.711 9 00:06:03.711 
17:43:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.711 17:43:07 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:03.711 17:43:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.711 17:43:07 -- common/autotest_common.sh@10 -- # set +x 00:06:04.681 10 00:06:04.681 17:43:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.681 17:43:08 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:04.681 17:43:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.681 17:43:08 -- common/autotest_common.sh@10 -- # set +x 00:06:05.285 17:43:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.286 17:43:09 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:05.286 17:43:09 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:05.286 17:43:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.286 17:43:09 -- common/autotest_common.sh@10 -- # set +x 00:06:05.856 17:43:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.856 17:43:10 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:05.856 17:43:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.856 17:43:10 -- common/autotest_common.sh@10 -- # set +x 00:06:06.427 17:43:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:06.427 17:43:10 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:06.427 17:43:10 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:06.427 17:43:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:06.427 17:43:10 -- common/autotest_common.sh@10 -- # set +x 00:06:06.998 17:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:06.998 00:06:06.998 real 0m4.968s 00:06:06.998 user 0m0.025s 00:06:06.998 sys 0m0.004s 00:06:06.998 17:43:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.998 17:43:11 -- common/autotest_common.sh@10 -- # set +x 00:06:06.998 ************************************ 00:06:06.998 END TEST scheduler_create_thread 00:06:06.998 ************************************ 00:06:06.998 17:43:11 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:06.998 17:43:11 -- scheduler/scheduler.sh@46 -- # killprocess 1485915 00:06:06.998 17:43:11 -- common/autotest_common.sh@926 -- # '[' -z 1485915 ']' 00:06:06.998 17:43:11 -- common/autotest_common.sh@930 -- # kill -0 1485915 00:06:06.998 17:43:11 -- common/autotest_common.sh@931 -- # uname 00:06:06.998 17:43:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:06.998 17:43:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1485915 00:06:06.998 17:43:11 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:06.998 17:43:11 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:06.998 17:43:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1485915' 00:06:06.998 killing process with pid 1485915 00:06:06.998 17:43:11 -- common/autotest_common.sh@945 -- # kill 1485915 00:06:06.998 17:43:11 -- common/autotest_common.sh@950 -- # wait 1485915 00:06:07.258 [2024-07-22 17:43:11.365488] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
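The scheduler_create_thread subtest above drives everything through an rpc.py plugin. The calls below mirror the trace (thread names, core masks and busy percentages copied from it) and assume scheduler_plugin is importable, e.g. with PYTHONPATH pointing at spdk/test/event/scheduler:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$SPDK/scripts/rpc.py --plugin scheduler_plugin"
# Busy threads pinned to individual cores (0x1 = core 0), plus idle pinned peers.
$rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100
$rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0
# Unpinned threads: create one idle, raise it to 50% busy, then create and delete another.
thread_id=$($rpc scheduler_thread_create -n half_active -a 0)
$rpc scheduler_thread_set_active "$thread_id" 50
thread_id=$($rpc scheduler_thread_create -n deleted -a 100)
$rpc scheduler_thread_delete "$thread_id"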
00:06:07.258 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:07.258 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:07.258 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:07.258 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:07.258 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:07.258 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:07.258 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:07.258 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:07.518 00:06:07.518 real 0m6.501s 00:06:07.518 user 0m14.800s 00:06:07.518 sys 0m0.399s 00:06:07.518 17:43:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.518 17:43:11 -- common/autotest_common.sh@10 -- # set +x 00:06:07.518 ************************************ 00:06:07.518 END TEST event_scheduler 00:06:07.518 ************************************ 00:06:07.518 17:43:11 -- event/event.sh@51 -- # modprobe -n nbd 00:06:07.518 17:43:11 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:07.518 17:43:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:07.518 17:43:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:07.518 17:43:11 -- common/autotest_common.sh@10 -- # set +x 00:06:07.518 ************************************ 00:06:07.518 START TEST app_repeat 00:06:07.518 ************************************ 00:06:07.518 17:43:11 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:06:07.518 17:43:11 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.518 17:43:11 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.518 17:43:11 -- event/event.sh@13 -- # local nbd_list 00:06:07.519 17:43:11 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.519 17:43:11 -- event/event.sh@14 -- # local bdev_list 00:06:07.519 17:43:11 -- event/event.sh@15 -- # local repeat_times=4 00:06:07.519 17:43:11 -- event/event.sh@17 -- # modprobe nbd 00:06:07.519 17:43:11 -- event/event.sh@19 -- # repeat_pid=1487116 00:06:07.519 17:43:11 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.519 17:43:11 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:07.519 17:43:11 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1487116' 00:06:07.519 Process app_repeat pid: 1487116 00:06:07.519 17:43:11 -- event/event.sh@23 -- # for i in {0..2} 00:06:07.519 17:43:11 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:07.519 spdk_app_start Round 0 00:06:07.519 17:43:11 -- event/event.sh@25 -- # waitforlisten 1487116 /var/tmp/spdk-nbd.sock 00:06:07.519 17:43:11 -- common/autotest_common.sh@819 -- # '[' -z 1487116 ']' 00:06:07.519 17:43:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.519 17:43:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:07.519 17:43:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:07.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:07.519 17:43:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:07.519 17:43:11 -- common/autotest_common.sh@10 -- # set +x 00:06:07.519 [2024-07-22 17:43:11.657139] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:07.519 [2024-07-22 17:43:11.657253] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1487116 ] 00:06:07.519 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.519 [2024-07-22 17:43:11.750401] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.779 [2024-07-22 17:43:11.811037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.779 [2024-07-22 17:43:11.811043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.351 17:43:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:08.351 17:43:12 -- common/autotest_common.sh@852 -- # return 0 00:06:08.351 17:43:12 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.612 Malloc0 00:06:08.612 17:43:12 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.612 Malloc1 00:06:08.612 17:43:12 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.612 17:43:12 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.612 17:43:12 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.612 17:43:12 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:08.612 17:43:12 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.612 17:43:12 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:08.612 17:43:12 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.612 17:43:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.612 17:43:12 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.612 17:43:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:08.612 17:43:12 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.612 17:43:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:08.612 17:43:12 -- bdev/nbd_common.sh@12 -- # local i 00:06:08.612 17:43:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:08.612 17:43:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.612 17:43:12 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:08.871 /dev/nbd0 00:06:08.871 17:43:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:08.871 17:43:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:08.871 17:43:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:08.871 17:43:13 -- common/autotest_common.sh@857 -- # local i 00:06:08.871 17:43:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:08.871 17:43:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:08.871 17:43:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:08.871 17:43:13 -- 
common/autotest_common.sh@861 -- # break 00:06:08.871 17:43:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:08.871 17:43:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:08.871 17:43:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.871 1+0 records in 00:06:08.871 1+0 records out 00:06:08.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287347 s, 14.3 MB/s 00:06:08.871 17:43:13 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.871 17:43:13 -- common/autotest_common.sh@874 -- # size=4096 00:06:08.871 17:43:13 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.871 17:43:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:08.871 17:43:13 -- common/autotest_common.sh@877 -- # return 0 00:06:08.871 17:43:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.871 17:43:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.871 17:43:13 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:09.131 /dev/nbd1 00:06:09.131 17:43:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:09.131 17:43:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:09.131 17:43:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:09.131 17:43:13 -- common/autotest_common.sh@857 -- # local i 00:06:09.131 17:43:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:09.131 17:43:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:09.131 17:43:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:09.131 17:43:13 -- common/autotest_common.sh@861 -- # break 00:06:09.131 17:43:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:09.131 17:43:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:09.131 17:43:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.131 1+0 records in 00:06:09.131 1+0 records out 00:06:09.131 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000132021 s, 31.0 MB/s 00:06:09.131 17:43:13 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:09.131 17:43:13 -- common/autotest_common.sh@874 -- # size=4096 00:06:09.131 17:43:13 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:09.131 17:43:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:09.131 17:43:13 -- common/autotest_common.sh@877 -- # return 0 00:06:09.131 17:43:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.131 17:43:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.131 17:43:13 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.131 17:43:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.131 17:43:13 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:09.392 { 00:06:09.392 "nbd_device": "/dev/nbd0", 00:06:09.392 "bdev_name": "Malloc0" 00:06:09.392 }, 00:06:09.392 { 00:06:09.392 "nbd_device": "/dev/nbd1", 
00:06:09.392 "bdev_name": "Malloc1" 00:06:09.392 } 00:06:09.392 ]' 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:09.392 { 00:06:09.392 "nbd_device": "/dev/nbd0", 00:06:09.392 "bdev_name": "Malloc0" 00:06:09.392 }, 00:06:09.392 { 00:06:09.392 "nbd_device": "/dev/nbd1", 00:06:09.392 "bdev_name": "Malloc1" 00:06:09.392 } 00:06:09.392 ]' 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:09.392 /dev/nbd1' 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:09.392 /dev/nbd1' 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@65 -- # count=2 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@95 -- # count=2 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:09.392 256+0 records in 00:06:09.392 256+0 records out 00:06:09.392 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125796 s, 83.4 MB/s 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:09.392 256+0 records in 00:06:09.392 256+0 records out 00:06:09.392 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150064 s, 69.9 MB/s 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:09.392 256+0 records in 00:06:09.392 256+0 records out 00:06:09.392 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0154243 s, 68.0 MB/s 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@51 -- # local i 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.392 17:43:13 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:09.652 17:43:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:09.652 17:43:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:09.652 17:43:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:09.652 17:43:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.652 17:43:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.652 17:43:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:09.652 17:43:13 -- bdev/nbd_common.sh@41 -- # break 00:06:09.652 17:43:13 -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.652 17:43:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.652 17:43:13 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:09.912 17:43:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:09.912 17:43:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:09.912 17:43:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:09.912 17:43:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.912 17:43:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.912 17:43:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:09.912 17:43:13 -- bdev/nbd_common.sh@41 -- # break 00:06:09.912 17:43:13 -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.912 17:43:13 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.912 17:43:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.912 17:43:13 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.912 17:43:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:09.912 17:43:14 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:09.912 17:43:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.172 17:43:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:10.172 17:43:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.172 17:43:14 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:10.172 17:43:14 -- bdev/nbd_common.sh@65 -- # true 00:06:10.172 17:43:14 -- bdev/nbd_common.sh@65 -- # count=0 00:06:10.172 17:43:14 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:10.172 17:43:14 -- bdev/nbd_common.sh@104 -- # count=0 00:06:10.172 17:43:14 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:10.172 17:43:14 -- bdev/nbd_common.sh@109 -- # return 0 00:06:10.172 17:43:14 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:10.172 17:43:14 -- event/event.sh@35 -- # 
sleep 3 00:06:10.433 [2024-07-22 17:43:14.556148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.433 [2024-07-22 17:43:14.615953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.433 [2024-07-22 17:43:14.615957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.433 [2024-07-22 17:43:14.646279] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:10.433 [2024-07-22 17:43:14.646314] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:13.732 17:43:17 -- event/event.sh@23 -- # for i in {0..2} 00:06:13.732 17:43:17 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:13.732 spdk_app_start Round 1 00:06:13.732 17:43:17 -- event/event.sh@25 -- # waitforlisten 1487116 /var/tmp/spdk-nbd.sock 00:06:13.732 17:43:17 -- common/autotest_common.sh@819 -- # '[' -z 1487116 ']' 00:06:13.732 17:43:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:13.732 17:43:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:13.732 17:43:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:13.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:13.732 17:43:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:13.732 17:43:17 -- common/autotest_common.sh@10 -- # set +x 00:06:13.732 17:43:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:13.732 17:43:17 -- common/autotest_common.sh@852 -- # return 0 00:06:13.732 17:43:17 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.732 Malloc0 00:06:13.732 17:43:17 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.732 Malloc1 00:06:13.732 17:43:17 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.732 17:43:17 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.732 17:43:17 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.732 17:43:17 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:13.732 17:43:17 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.732 17:43:17 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:13.732 17:43:17 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.732 17:43:17 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.732 17:43:17 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.732 17:43:17 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:13.732 17:43:17 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.732 17:43:17 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:13.732 17:43:17 -- bdev/nbd_common.sh@12 -- # local i 00:06:13.732 17:43:17 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:13.732 17:43:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.732 17:43:17 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:13.993 /dev/nbd0 00:06:13.993 17:43:18 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:13.993 17:43:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:13.993 17:43:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:13.993 17:43:18 -- common/autotest_common.sh@857 -- # local i 00:06:13.993 17:43:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:13.993 17:43:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:13.993 17:43:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:13.993 17:43:18 -- common/autotest_common.sh@861 -- # break 00:06:13.993 17:43:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:13.993 17:43:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:13.993 17:43:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.993 1+0 records in 00:06:13.993 1+0 records out 00:06:13.993 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206186 s, 19.9 MB/s 00:06:13.993 17:43:18 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.993 17:43:18 -- common/autotest_common.sh@874 -- # size=4096 00:06:13.993 17:43:18 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.993 17:43:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:13.993 17:43:18 -- common/autotest_common.sh@877 -- # return 0 00:06:13.993 17:43:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.993 17:43:18 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.993 17:43:18 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:14.253 /dev/nbd1 00:06:14.253 17:43:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:14.253 17:43:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:14.254 17:43:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:14.254 17:43:18 -- common/autotest_common.sh@857 -- # local i 00:06:14.254 17:43:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:14.254 17:43:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:14.254 17:43:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:14.254 17:43:18 -- common/autotest_common.sh@861 -- # break 00:06:14.254 17:43:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:14.254 17:43:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:14.254 17:43:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.254 1+0 records in 00:06:14.254 1+0 records out 00:06:14.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265621 s, 15.4 MB/s 00:06:14.254 17:43:18 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:14.254 17:43:18 -- common/autotest_common.sh@874 -- # size=4096 00:06:14.254 17:43:18 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:14.254 17:43:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:14.254 17:43:18 -- common/autotest_common.sh@877 -- # return 0 00:06:14.254 17:43:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.254 17:43:18 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.254 17:43:18 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.254 17:43:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.254 17:43:18 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:14.514 { 00:06:14.514 "nbd_device": "/dev/nbd0", 00:06:14.514 "bdev_name": "Malloc0" 00:06:14.514 }, 00:06:14.514 { 00:06:14.514 "nbd_device": "/dev/nbd1", 00:06:14.514 "bdev_name": "Malloc1" 00:06:14.514 } 00:06:14.514 ]' 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:14.514 { 00:06:14.514 "nbd_device": "/dev/nbd0", 00:06:14.514 "bdev_name": "Malloc0" 00:06:14.514 }, 00:06:14.514 { 00:06:14.514 "nbd_device": "/dev/nbd1", 00:06:14.514 "bdev_name": "Malloc1" 00:06:14.514 } 00:06:14.514 ]' 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:14.514 /dev/nbd1' 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:14.514 /dev/nbd1' 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@65 -- # count=2 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@95 -- # count=2 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:14.514 256+0 records in 00:06:14.514 256+0 records out 00:06:14.514 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119756 s, 87.6 MB/s 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:14.514 256+0 records in 00:06:14.514 256+0 records out 00:06:14.514 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146728 s, 71.5 MB/s 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:14.514 256+0 records in 00:06:14.514 256+0 records out 00:06:14.514 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159194 s, 65.9 MB/s 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@51 -- # local i 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.514 17:43:18 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:14.774 17:43:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:14.774 17:43:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:14.774 17:43:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:14.774 17:43:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.774 17:43:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.774 17:43:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:14.774 17:43:18 -- bdev/nbd_common.sh@41 -- # break 00:06:14.774 17:43:18 -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.774 17:43:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.774 17:43:18 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:15.035 17:43:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:15.035 17:43:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:15.035 17:43:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:15.035 17:43:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.035 17:43:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.035 17:43:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:15.035 17:43:19 -- bdev/nbd_common.sh@41 -- # break 00:06:15.035 17:43:19 -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.035 17:43:19 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.035 17:43:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.035 17:43:19 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.295 17:43:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:15.295 17:43:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.295 17:43:19 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:15.295 17:43:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:15.295 17:43:19 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:15.295 17:43:19 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:06:15.295 17:43:19 -- bdev/nbd_common.sh@65 -- # true 00:06:15.295 17:43:19 -- bdev/nbd_common.sh@65 -- # count=0 00:06:15.295 17:43:19 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:15.295 17:43:19 -- bdev/nbd_common.sh@104 -- # count=0 00:06:15.295 17:43:19 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:15.295 17:43:19 -- bdev/nbd_common.sh@109 -- # return 0 00:06:15.295 17:43:19 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:15.295 17:43:19 -- event/event.sh@35 -- # sleep 3 00:06:15.555 [2024-07-22 17:43:19.672805] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.555 [2024-07-22 17:43:19.731457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.556 [2024-07-22 17:43:19.731461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.556 [2024-07-22 17:43:19.761792] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:15.556 [2024-07-22 17:43:19.761835] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:18.853 17:43:22 -- event/event.sh@23 -- # for i in {0..2} 00:06:18.853 17:43:22 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:18.853 spdk_app_start Round 2 00:06:18.853 17:43:22 -- event/event.sh@25 -- # waitforlisten 1487116 /var/tmp/spdk-nbd.sock 00:06:18.853 17:43:22 -- common/autotest_common.sh@819 -- # '[' -z 1487116 ']' 00:06:18.853 17:43:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:18.853 17:43:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:18.853 17:43:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:18.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
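Each app_repeat round above repeats the same nbd round trip; stripped of the suite's helpers it is roughly the following sketch, with /tmp/nbdrandtest standing in for the test's temp file under spdk/test/event:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc bdev_malloc_create 64 4096                        # 64 MiB malloc bdev with 4 KiB blocks -> Malloc0
$rpc nbd_start_disk Malloc0 /dev/nbd0                  # expose the bdev as a kernel block device
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256        # 1 MiB of random data
dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0                # read it back through the bdev and compare
$rpc nbd_stop_disk /dev/nbd0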
00:06:18.853 17:43:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:18.853 17:43:22 -- common/autotest_common.sh@10 -- # set +x 00:06:18.853 17:43:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:18.853 17:43:22 -- common/autotest_common.sh@852 -- # return 0 00:06:18.853 17:43:22 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.853 Malloc0 00:06:18.853 17:43:22 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.853 Malloc1 00:06:18.853 17:43:23 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.853 17:43:23 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.853 17:43:23 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.853 17:43:23 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:18.853 17:43:23 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.853 17:43:23 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:18.853 17:43:23 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.853 17:43:23 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.853 17:43:23 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.853 17:43:23 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:18.853 17:43:23 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.853 17:43:23 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:18.853 17:43:23 -- bdev/nbd_common.sh@12 -- # local i 00:06:18.853 17:43:23 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:18.853 17:43:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.853 17:43:23 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:19.114 /dev/nbd0 00:06:19.114 17:43:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.114 17:43:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.114 17:43:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:19.114 17:43:23 -- common/autotest_common.sh@857 -- # local i 00:06:19.114 17:43:23 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:19.114 17:43:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:19.114 17:43:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:19.114 17:43:23 -- common/autotest_common.sh@861 -- # break 00:06:19.114 17:43:23 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:19.114 17:43:23 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:19.114 17:43:23 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.114 1+0 records in 00:06:19.114 1+0 records out 00:06:19.114 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00011708 s, 35.0 MB/s 00:06:19.114 17:43:23 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.114 17:43:23 -- common/autotest_common.sh@874 -- # size=4096 00:06:19.114 17:43:23 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.114 17:43:23 -- common/autotest_common.sh@876 -- # 
'[' 4096 '!=' 0 ']' 00:06:19.114 17:43:23 -- common/autotest_common.sh@877 -- # return 0 00:06:19.114 17:43:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.114 17:43:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.114 17:43:23 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:19.375 /dev/nbd1 00:06:19.375 17:43:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.375 17:43:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.375 17:43:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:19.375 17:43:23 -- common/autotest_common.sh@857 -- # local i 00:06:19.375 17:43:23 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:19.375 17:43:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:19.375 17:43:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:19.375 17:43:23 -- common/autotest_common.sh@861 -- # break 00:06:19.375 17:43:23 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:19.375 17:43:23 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:19.375 17:43:23 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.375 1+0 records in 00:06:19.375 1+0 records out 00:06:19.375 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240845 s, 17.0 MB/s 00:06:19.375 17:43:23 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.375 17:43:23 -- common/autotest_common.sh@874 -- # size=4096 00:06:19.375 17:43:23 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.375 17:43:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:19.375 17:43:23 -- common/autotest_common.sh@877 -- # return 0 00:06:19.375 17:43:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.375 17:43:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.375 17:43:23 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.375 17:43:23 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.375 17:43:23 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.635 17:43:23 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:19.635 { 00:06:19.635 "nbd_device": "/dev/nbd0", 00:06:19.635 "bdev_name": "Malloc0" 00:06:19.635 }, 00:06:19.635 { 00:06:19.635 "nbd_device": "/dev/nbd1", 00:06:19.635 "bdev_name": "Malloc1" 00:06:19.635 } 00:06:19.635 ]' 00:06:19.635 17:43:23 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:19.635 { 00:06:19.635 "nbd_device": "/dev/nbd0", 00:06:19.635 "bdev_name": "Malloc0" 00:06:19.635 }, 00:06:19.635 { 00:06:19.635 "nbd_device": "/dev/nbd1", 00:06:19.635 "bdev_name": "Malloc1" 00:06:19.635 } 00:06:19.635 ]' 00:06:19.635 17:43:23 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.635 17:43:23 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:19.635 /dev/nbd1' 00:06:19.635 17:43:23 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:19.635 /dev/nbd1' 00:06:19.635 17:43:23 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.635 17:43:23 -- bdev/nbd_common.sh@65 -- # count=2 00:06:19.635 17:43:23 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:19.635 17:43:23 -- bdev/nbd_common.sh@95 -- # count=2 00:06:19.635 17:43:23 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:19.635 17:43:23 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:19.635 17:43:23 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.635 17:43:23 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.635 17:43:23 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:19.635 17:43:23 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.635 17:43:23 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:19.635 17:43:23 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:19.635 256+0 records in 00:06:19.635 256+0 records out 00:06:19.636 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125648 s, 83.5 MB/s 00:06:19.636 17:43:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.636 17:43:23 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:19.636 256+0 records in 00:06:19.636 256+0 records out 00:06:19.636 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142153 s, 73.8 MB/s 00:06:19.636 17:43:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.636 17:43:23 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:19.636 256+0 records in 00:06:19.636 256+0 records out 00:06:19.636 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252169 s, 41.6 MB/s 00:06:19.636 17:43:23 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:19.636 17:43:23 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.636 17:43:23 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.636 17:43:23 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:19.636 17:43:23 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.636 17:43:23 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:19.636 17:43:23 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:19.636 17:43:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.636 17:43:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:19.636 17:43:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.636 17:43:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:19.636 17:43:23 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.636 17:43:23 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:19.636 17:43:23 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.636 17:43:23 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.636 17:43:23 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.636 17:43:23 -- bdev/nbd_common.sh@51 -- # local i 00:06:19.636 17:43:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.636 17:43:23 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:19.897 17:43:24 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:19.897 17:43:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:19.897 17:43:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:19.897 17:43:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.897 17:43:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.897 17:43:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:19.897 17:43:24 -- bdev/nbd_common.sh@41 -- # break 00:06:19.897 17:43:24 -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.897 17:43:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.897 17:43:24 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.157 17:43:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.157 17:43:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.157 17:43:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.157 17:43:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.157 17:43:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.157 17:43:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.157 17:43:24 -- bdev/nbd_common.sh@41 -- # break 00:06:20.157 17:43:24 -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.157 17:43:24 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.157 17:43:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.157 17:43:24 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.157 17:43:24 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:20.157 17:43:24 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:20.157 17:43:24 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.417 17:43:24 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:20.417 17:43:24 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:20.417 17:43:24 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.417 17:43:24 -- bdev/nbd_common.sh@65 -- # true 00:06:20.417 17:43:24 -- bdev/nbd_common.sh@65 -- # count=0 00:06:20.417 17:43:24 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:20.417 17:43:24 -- bdev/nbd_common.sh@104 -- # count=0 00:06:20.417 17:43:24 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:20.417 17:43:24 -- bdev/nbd_common.sh@109 -- # return 0 00:06:20.417 17:43:24 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:20.417 17:43:24 -- event/event.sh@35 -- # sleep 3 00:06:20.678 [2024-07-22 17:43:24.796039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:20.678 [2024-07-22 17:43:24.855252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.678 [2024-07-22 17:43:24.855258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.678 [2024-07-22 17:43:24.885605] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:20.678 [2024-07-22 17:43:24.885643] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
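The empty-list check that closes each round (nbd_get_disks piped through jq and grep -c) is how the suite confirms both devices are really gone before it tears the app down and starts the next round. The gist, as a sketch:

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
# After both nbd_stop_disk calls, nbd_get_disks should return an empty JSON array.
disks_json=$($rpc nbd_get_disks)
count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
if [ "$count" -eq 0 ]; then
    $rpc spdk_kill_instance SIGTERM    # ask the app to exit; the test then sleeps 3s before the next round
fi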
00:06:23.975 17:43:27 -- event/event.sh@38 -- # waitforlisten 1487116 /var/tmp/spdk-nbd.sock 00:06:23.975 17:43:27 -- common/autotest_common.sh@819 -- # '[' -z 1487116 ']' 00:06:23.975 17:43:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:23.976 17:43:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:23.976 17:43:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:23.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:23.976 17:43:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:23.976 17:43:27 -- common/autotest_common.sh@10 -- # set +x 00:06:23.976 17:43:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:23.976 17:43:27 -- common/autotest_common.sh@852 -- # return 0 00:06:23.976 17:43:27 -- event/event.sh@39 -- # killprocess 1487116 00:06:23.976 17:43:27 -- common/autotest_common.sh@926 -- # '[' -z 1487116 ']' 00:06:23.976 17:43:27 -- common/autotest_common.sh@930 -- # kill -0 1487116 00:06:23.976 17:43:27 -- common/autotest_common.sh@931 -- # uname 00:06:23.976 17:43:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:23.976 17:43:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1487116 00:06:23.976 17:43:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:23.976 17:43:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:23.976 17:43:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1487116' 00:06:23.976 killing process with pid 1487116 00:06:23.976 17:43:27 -- common/autotest_common.sh@945 -- # kill 1487116 00:06:23.976 17:43:27 -- common/autotest_common.sh@950 -- # wait 1487116 00:06:23.976 spdk_app_start is called in Round 0. 00:06:23.976 Shutdown signal received, stop current app iteration 00:06:23.976 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:06:23.976 spdk_app_start is called in Round 1. 00:06:23.976 Shutdown signal received, stop current app iteration 00:06:23.976 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:06:23.976 spdk_app_start is called in Round 2. 00:06:23.976 Shutdown signal received, stop current app iteration 00:06:23.976 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:06:23.976 spdk_app_start is called in Round 3. 
00:06:23.976 Shutdown signal received, stop current app iteration 00:06:23.976 17:43:28 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:23.976 17:43:28 -- event/event.sh@42 -- # return 0 00:06:23.976 00:06:23.976 real 0m16.396s 00:06:23.976 user 0m36.089s 00:06:23.976 sys 0m2.199s 00:06:23.976 17:43:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.976 17:43:28 -- common/autotest_common.sh@10 -- # set +x 00:06:23.976 ************************************ 00:06:23.976 END TEST app_repeat 00:06:23.976 ************************************ 00:06:23.976 17:43:28 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:23.976 17:43:28 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:23.976 17:43:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:23.976 17:43:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:23.976 17:43:28 -- common/autotest_common.sh@10 -- # set +x 00:06:23.976 ************************************ 00:06:23.976 START TEST cpu_locks 00:06:23.976 ************************************ 00:06:23.976 17:43:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:23.976 * Looking for test storage... 00:06:23.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:23.976 17:43:28 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:23.976 17:43:28 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:23.976 17:43:28 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:23.976 17:43:28 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:23.976 17:43:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:23.976 17:43:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:23.976 17:43:28 -- common/autotest_common.sh@10 -- # set +x 00:06:23.976 ************************************ 00:06:23.976 START TEST default_locks 00:06:23.976 ************************************ 00:06:23.976 17:43:28 -- common/autotest_common.sh@1104 -- # default_locks 00:06:23.976 17:43:28 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1490126 00:06:23.976 17:43:28 -- event/cpu_locks.sh@47 -- # waitforlisten 1490126 00:06:23.976 17:43:28 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:23.976 17:43:28 -- common/autotest_common.sh@819 -- # '[' -z 1490126 ']' 00:06:23.976 17:43:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.976 17:43:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:23.976 17:43:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.976 17:43:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:23.976 17:43:28 -- common/autotest_common.sh@10 -- # set +x 00:06:23.976 [2024-07-22 17:43:28.215938] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:23.976 [2024-07-22 17:43:28.215999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1490126 ] 00:06:23.976 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.237 [2024-07-22 17:43:28.298144] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.237 [2024-07-22 17:43:28.360800] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:24.237 [2024-07-22 17:43:28.360920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.807 17:43:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:24.807 17:43:29 -- common/autotest_common.sh@852 -- # return 0 00:06:24.807 17:43:29 -- event/cpu_locks.sh@49 -- # locks_exist 1490126 00:06:24.807 17:43:29 -- event/cpu_locks.sh@22 -- # lslocks -p 1490126 00:06:24.807 17:43:29 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.377 lslocks: write error 00:06:25.377 17:43:29 -- event/cpu_locks.sh@50 -- # killprocess 1490126 00:06:25.377 17:43:29 -- common/autotest_common.sh@926 -- # '[' -z 1490126 ']' 00:06:25.377 17:43:29 -- common/autotest_common.sh@930 -- # kill -0 1490126 00:06:25.377 17:43:29 -- common/autotest_common.sh@931 -- # uname 00:06:25.377 17:43:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:25.377 17:43:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1490126 00:06:25.377 17:43:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:25.377 17:43:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:25.377 17:43:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1490126' 00:06:25.377 killing process with pid 1490126 00:06:25.377 17:43:29 -- common/autotest_common.sh@945 -- # kill 1490126 00:06:25.377 17:43:29 -- common/autotest_common.sh@950 -- # wait 1490126 00:06:25.637 17:43:29 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1490126 00:06:25.637 17:43:29 -- common/autotest_common.sh@640 -- # local es=0 00:06:25.637 17:43:29 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 1490126 00:06:25.637 17:43:29 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:25.637 17:43:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:25.637 17:43:29 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:25.637 17:43:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:25.637 17:43:29 -- common/autotest_common.sh@643 -- # waitforlisten 1490126 00:06:25.637 17:43:29 -- common/autotest_common.sh@819 -- # '[' -z 1490126 ']' 00:06:25.637 17:43:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.637 17:43:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:25.637 17:43:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:25.638 17:43:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:25.638 17:43:29 -- common/autotest_common.sh@10 -- # set +x 00:06:25.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (1490126) - No such process 00:06:25.638 ERROR: process (pid: 1490126) is no longer running 00:06:25.638 17:43:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:25.638 17:43:29 -- common/autotest_common.sh@852 -- # return 1 00:06:25.638 17:43:29 -- common/autotest_common.sh@643 -- # es=1 00:06:25.638 17:43:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:25.638 17:43:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:25.638 17:43:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:25.638 17:43:29 -- event/cpu_locks.sh@54 -- # no_locks 00:06:25.638 17:43:29 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:25.638 17:43:29 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:25.638 17:43:29 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:25.638 00:06:25.638 real 0m1.694s 00:06:25.638 user 0m1.837s 00:06:25.638 sys 0m0.565s 00:06:25.638 17:43:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.638 17:43:29 -- common/autotest_common.sh@10 -- # set +x 00:06:25.638 ************************************ 00:06:25.638 END TEST default_locks 00:06:25.638 ************************************ 00:06:25.638 17:43:29 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:25.638 17:43:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:25.638 17:43:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:25.638 17:43:29 -- common/autotest_common.sh@10 -- # set +x 00:06:25.638 ************************************ 00:06:25.638 START TEST default_locks_via_rpc 00:06:25.638 ************************************ 00:06:25.638 17:43:29 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:25.638 17:43:29 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1490463 00:06:25.638 17:43:29 -- event/cpu_locks.sh@63 -- # waitforlisten 1490463 00:06:25.638 17:43:29 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:25.638 17:43:29 -- common/autotest_common.sh@819 -- # '[' -z 1490463 ']' 00:06:25.638 17:43:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.638 17:43:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:25.638 17:43:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.638 17:43:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:25.638 17:43:29 -- common/autotest_common.sh@10 -- # set +x 00:06:25.899 [2024-07-22 17:43:29.964812] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:25.899 [2024-07-22 17:43:29.964870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1490463 ] 00:06:25.899 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.899 [2024-07-22 17:43:30.049272] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.899 [2024-07-22 17:43:30.113126] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:25.899 [2024-07-22 17:43:30.113257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.838 17:43:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:26.838 17:43:30 -- common/autotest_common.sh@852 -- # return 0 00:06:26.838 17:43:30 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:26.838 17:43:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:26.838 17:43:30 -- common/autotest_common.sh@10 -- # set +x 00:06:26.838 17:43:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:26.838 17:43:30 -- event/cpu_locks.sh@67 -- # no_locks 00:06:26.838 17:43:30 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:26.838 17:43:30 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:26.838 17:43:30 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:26.838 17:43:30 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:26.838 17:43:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:26.838 17:43:30 -- common/autotest_common.sh@10 -- # set +x 00:06:26.838 17:43:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:26.838 17:43:30 -- event/cpu_locks.sh@71 -- # locks_exist 1490463 00:06:26.838 17:43:30 -- event/cpu_locks.sh@22 -- # lslocks -p 1490463 00:06:26.838 17:43:30 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:26.838 17:43:31 -- event/cpu_locks.sh@73 -- # killprocess 1490463 00:06:26.838 17:43:31 -- common/autotest_common.sh@926 -- # '[' -z 1490463 ']' 00:06:26.838 17:43:31 -- common/autotest_common.sh@930 -- # kill -0 1490463 00:06:26.838 17:43:31 -- common/autotest_common.sh@931 -- # uname 00:06:26.838 17:43:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:26.838 17:43:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1490463 00:06:26.838 17:43:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:26.838 17:43:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:26.838 17:43:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1490463' 00:06:26.838 killing process with pid 1490463 00:06:26.838 17:43:31 -- common/autotest_common.sh@945 -- # kill 1490463 00:06:26.838 17:43:31 -- common/autotest_common.sh@950 -- # wait 1490463 00:06:27.099 00:06:27.099 real 0m1.388s 00:06:27.099 user 0m1.515s 00:06:27.099 sys 0m0.450s 00:06:27.099 17:43:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.099 17:43:31 -- common/autotest_common.sh@10 -- # set +x 00:06:27.099 ************************************ 00:06:27.099 END TEST default_locks_via_rpc 00:06:27.099 ************************************ 00:06:27.099 17:43:31 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:27.099 17:43:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:27.099 17:43:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:27.099 17:43:31 -- 
common/autotest_common.sh@10 -- # set +x 00:06:27.099 ************************************ 00:06:27.099 START TEST non_locking_app_on_locked_coremask 00:06:27.099 ************************************ 00:06:27.099 17:43:31 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:27.099 17:43:31 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1490794 00:06:27.099 17:43:31 -- event/cpu_locks.sh@81 -- # waitforlisten 1490794 /var/tmp/spdk.sock 00:06:27.099 17:43:31 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.099 17:43:31 -- common/autotest_common.sh@819 -- # '[' -z 1490794 ']' 00:06:27.099 17:43:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.099 17:43:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:27.099 17:43:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.099 17:43:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:27.099 17:43:31 -- common/autotest_common.sh@10 -- # set +x 00:06:27.359 [2024-07-22 17:43:31.394470] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:27.359 [2024-07-22 17:43:31.394525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1490794 ] 00:06:27.359 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.359 [2024-07-22 17:43:31.474262] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.359 [2024-07-22 17:43:31.535418] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:27.359 [2024-07-22 17:43:31.535550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.933 17:43:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:27.933 17:43:32 -- common/autotest_common.sh@852 -- # return 0 00:06:27.933 17:43:32 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:27.933 17:43:32 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1490831 00:06:27.933 17:43:32 -- event/cpu_locks.sh@85 -- # waitforlisten 1490831 /var/tmp/spdk2.sock 00:06:28.195 17:43:32 -- common/autotest_common.sh@819 -- # '[' -z 1490831 ']' 00:06:28.195 17:43:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.195 17:43:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:28.195 17:43:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.195 17:43:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:28.195 17:43:32 -- common/autotest_common.sh@10 -- # set +x 00:06:28.195 [2024-07-22 17:43:32.246153] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:28.195 [2024-07-22 17:43:32.246202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1490831 ] 00:06:28.195 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.195 [2024-07-22 17:43:32.342925] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:28.195 [2024-07-22 17:43:32.342953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.195 [2024-07-22 17:43:32.463116] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:28.195 [2024-07-22 17:43:32.463248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.136 17:43:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:29.136 17:43:33 -- common/autotest_common.sh@852 -- # return 0 00:06:29.136 17:43:33 -- event/cpu_locks.sh@87 -- # locks_exist 1490794 00:06:29.136 17:43:33 -- event/cpu_locks.sh@22 -- # lslocks -p 1490794 00:06:29.136 17:43:33 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.396 lslocks: write error 00:06:29.396 17:43:33 -- event/cpu_locks.sh@89 -- # killprocess 1490794 00:06:29.396 17:43:33 -- common/autotest_common.sh@926 -- # '[' -z 1490794 ']' 00:06:29.396 17:43:33 -- common/autotest_common.sh@930 -- # kill -0 1490794 00:06:29.396 17:43:33 -- common/autotest_common.sh@931 -- # uname 00:06:29.396 17:43:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:29.396 17:43:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1490794 00:06:29.396 17:43:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:29.397 17:43:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:29.397 17:43:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1490794' 00:06:29.397 killing process with pid 1490794 00:06:29.397 17:43:33 -- common/autotest_common.sh@945 -- # kill 1490794 00:06:29.397 17:43:33 -- common/autotest_common.sh@950 -- # wait 1490794 00:06:29.968 17:43:33 -- event/cpu_locks.sh@90 -- # killprocess 1490831 00:06:29.968 17:43:33 -- common/autotest_common.sh@926 -- # '[' -z 1490831 ']' 00:06:29.968 17:43:33 -- common/autotest_common.sh@930 -- # kill -0 1490831 00:06:29.968 17:43:33 -- common/autotest_common.sh@931 -- # uname 00:06:29.968 17:43:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:29.968 17:43:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1490831 00:06:29.968 17:43:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:29.968 17:43:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:29.968 17:43:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1490831' 00:06:29.968 killing process with pid 1490831 00:06:29.968 17:43:34 -- common/autotest_common.sh@945 -- # kill 1490831 00:06:29.968 17:43:34 -- common/autotest_common.sh@950 -- # wait 1490831 00:06:30.230 00:06:30.230 real 0m2.916s 00:06:30.230 user 0m3.261s 00:06:30.230 sys 0m0.863s 00:06:30.230 17:43:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.230 17:43:34 -- common/autotest_common.sh@10 -- # set +x 00:06:30.230 ************************************ 00:06:30.230 END TEST non_locking_app_on_locked_coremask 00:06:30.230 ************************************ 00:06:30.230 17:43:34 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:06:30.230 17:43:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:30.230 17:43:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:30.230 17:43:34 -- common/autotest_common.sh@10 -- # set +x 00:06:30.230 ************************************ 00:06:30.230 START TEST locking_app_on_unlocked_coremask 00:06:30.230 ************************************ 00:06:30.230 17:43:34 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:30.230 17:43:34 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1491217 00:06:30.230 17:43:34 -- event/cpu_locks.sh@99 -- # waitforlisten 1491217 /var/tmp/spdk.sock 00:06:30.230 17:43:34 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:30.230 17:43:34 -- common/autotest_common.sh@819 -- # '[' -z 1491217 ']' 00:06:30.230 17:43:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.230 17:43:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:30.230 17:43:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.230 17:43:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:30.230 17:43:34 -- common/autotest_common.sh@10 -- # set +x 00:06:30.230 [2024-07-22 17:43:34.345440] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:30.230 [2024-07-22 17:43:34.345495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1491217 ] 00:06:30.230 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.230 [2024-07-22 17:43:34.427144] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:30.230 [2024-07-22 17:43:34.427179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.230 [2024-07-22 17:43:34.487097] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:30.230 [2024-07-22 17:43:34.487232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.171 17:43:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:31.171 17:43:35 -- common/autotest_common.sh@852 -- # return 0 00:06:31.171 17:43:35 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:31.171 17:43:35 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1491458 00:06:31.171 17:43:35 -- event/cpu_locks.sh@103 -- # waitforlisten 1491458 /var/tmp/spdk2.sock 00:06:31.171 17:43:35 -- common/autotest_common.sh@819 -- # '[' -z 1491458 ']' 00:06:31.171 17:43:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.171 17:43:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:31.171 17:43:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:31.171 17:43:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:31.171 17:43:35 -- common/autotest_common.sh@10 -- # set +x 00:06:31.171 [2024-07-22 17:43:35.201487] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:31.171 [2024-07-22 17:43:35.201548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1491458 ] 00:06:31.171 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.171 [2024-07-22 17:43:35.303654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.171 [2024-07-22 17:43:35.423585] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:31.171 [2024-07-22 17:43:35.423712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.112 17:43:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:32.112 17:43:36 -- common/autotest_common.sh@852 -- # return 0 00:06:32.112 17:43:36 -- event/cpu_locks.sh@105 -- # locks_exist 1491458 00:06:32.112 17:43:36 -- event/cpu_locks.sh@22 -- # lslocks -p 1491458 00:06:32.112 17:43:36 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:32.410 lslocks: write error 00:06:32.410 17:43:36 -- event/cpu_locks.sh@107 -- # killprocess 1491217 00:06:32.410 17:43:36 -- common/autotest_common.sh@926 -- # '[' -z 1491217 ']' 00:06:32.410 17:43:36 -- common/autotest_common.sh@930 -- # kill -0 1491217 00:06:32.410 17:43:36 -- common/autotest_common.sh@931 -- # uname 00:06:32.410 17:43:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:32.410 17:43:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1491217 00:06:32.410 17:43:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:32.410 17:43:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:32.410 17:43:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1491217' 00:06:32.410 killing process with pid 1491217 00:06:32.410 17:43:36 -- common/autotest_common.sh@945 -- # kill 1491217 00:06:32.410 17:43:36 -- common/autotest_common.sh@950 -- # wait 1491217 00:06:32.669 17:43:36 -- event/cpu_locks.sh@108 -- # killprocess 1491458 00:06:32.669 17:43:36 -- common/autotest_common.sh@926 -- # '[' -z 1491458 ']' 00:06:32.669 17:43:36 -- common/autotest_common.sh@930 -- # kill -0 1491458 00:06:32.669 17:43:36 -- common/autotest_common.sh@931 -- # uname 00:06:32.669 17:43:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:32.669 17:43:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1491458 00:06:32.930 17:43:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:32.930 17:43:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:32.930 17:43:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1491458' 00:06:32.930 killing process with pid 1491458 00:06:32.930 17:43:36 -- common/autotest_common.sh@945 -- # kill 1491458 00:06:32.930 17:43:36 -- common/autotest_common.sh@950 -- # wait 1491458 00:06:32.930 00:06:32.930 real 0m2.887s 00:06:32.930 user 0m3.245s 00:06:32.930 sys 0m0.819s 00:06:32.930 17:43:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.930 17:43:37 -- common/autotest_common.sh@10 -- # set +x 00:06:32.930 ************************************ 00:06:32.930 END TEST locking_app_on_unlocked_coremask 
00:06:32.930 ************************************ 00:06:33.194 17:43:37 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:33.194 17:43:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:33.194 17:43:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:33.194 17:43:37 -- common/autotest_common.sh@10 -- # set +x 00:06:33.194 ************************************ 00:06:33.194 START TEST locking_app_on_locked_coremask 00:06:33.194 ************************************ 00:06:33.194 17:43:37 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:33.194 17:43:37 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1491807 00:06:33.194 17:43:37 -- event/cpu_locks.sh@116 -- # waitforlisten 1491807 /var/tmp/spdk.sock 00:06:33.194 17:43:37 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:33.194 17:43:37 -- common/autotest_common.sh@819 -- # '[' -z 1491807 ']' 00:06:33.194 17:43:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.194 17:43:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:33.194 17:43:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.194 17:43:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:33.194 17:43:37 -- common/autotest_common.sh@10 -- # set +x 00:06:33.194 [2024-07-22 17:43:37.278093] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:33.194 [2024-07-22 17:43:37.278149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1491807 ] 00:06:33.194 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.194 [2024-07-22 17:43:37.357371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.194 [2024-07-22 17:43:37.416805] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:33.194 [2024-07-22 17:43:37.416922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.835 17:43:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:33.835 17:43:38 -- common/autotest_common.sh@852 -- # return 0 00:06:33.835 17:43:38 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1491981 00:06:33.835 17:43:38 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1491981 /var/tmp/spdk2.sock 00:06:33.835 17:43:38 -- common/autotest_common.sh@640 -- # local es=0 00:06:33.835 17:43:38 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:33.835 17:43:38 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 1491981 /var/tmp/spdk2.sock 00:06:33.835 17:43:38 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:33.835 17:43:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:33.835 17:43:38 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:33.836 17:43:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:33.836 17:43:38 -- common/autotest_common.sh@643 -- # waitforlisten 1491981 /var/tmp/spdk2.sock 00:06:33.836 17:43:38 -- common/autotest_common.sh@819 -- 
# '[' -z 1491981 ']' 00:06:33.836 17:43:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.836 17:43:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:33.836 17:43:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.836 17:43:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:33.836 17:43:38 -- common/autotest_common.sh@10 -- # set +x 00:06:34.097 [2024-07-22 17:43:38.159212] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:34.097 [2024-07-22 17:43:38.159280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1491981 ] 00:06:34.097 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.097 [2024-07-22 17:43:38.250591] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1491807 has claimed it. 00:06:34.097 [2024-07-22 17:43:38.250625] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:34.667 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (1491981) - No such process 00:06:34.667 ERROR: process (pid: 1491981) is no longer running 00:06:34.667 17:43:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:34.667 17:43:38 -- common/autotest_common.sh@852 -- # return 1 00:06:34.667 17:43:38 -- common/autotest_common.sh@643 -- # es=1 00:06:34.667 17:43:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:34.667 17:43:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:34.667 17:43:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:34.667 17:43:38 -- event/cpu_locks.sh@122 -- # locks_exist 1491807 00:06:34.667 17:43:38 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.667 17:43:38 -- event/cpu_locks.sh@22 -- # lslocks -p 1491807 00:06:34.927 lslocks: write error 00:06:34.927 17:43:39 -- event/cpu_locks.sh@124 -- # killprocess 1491807 00:06:34.927 17:43:39 -- common/autotest_common.sh@926 -- # '[' -z 1491807 ']' 00:06:34.927 17:43:39 -- common/autotest_common.sh@930 -- # kill -0 1491807 00:06:34.927 17:43:39 -- common/autotest_common.sh@931 -- # uname 00:06:34.927 17:43:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:34.927 17:43:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1491807 00:06:34.927 17:43:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:34.927 17:43:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:34.927 17:43:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1491807' 00:06:34.927 killing process with pid 1491807 00:06:34.927 17:43:39 -- common/autotest_common.sh@945 -- # kill 1491807 00:06:34.927 17:43:39 -- common/autotest_common.sh@950 -- # wait 1491807 00:06:35.189 00:06:35.189 real 0m2.034s 00:06:35.189 user 0m2.355s 00:06:35.189 sys 0m0.513s 00:06:35.189 17:43:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.189 17:43:39 -- common/autotest_common.sh@10 -- # set +x 00:06:35.189 ************************************ 00:06:35.189 END TEST locking_app_on_locked_coremask 00:06:35.189 ************************************ 00:06:35.189 
17:43:39 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:35.189 17:43:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:35.189 17:43:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.189 17:43:39 -- common/autotest_common.sh@10 -- # set +x 00:06:35.189 ************************************ 00:06:35.189 START TEST locking_overlapped_coremask 00:06:35.189 ************************************ 00:06:35.189 17:43:39 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:35.189 17:43:39 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1492158 00:06:35.189 17:43:39 -- event/cpu_locks.sh@133 -- # waitforlisten 1492158 /var/tmp/spdk.sock 00:06:35.189 17:43:39 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:35.189 17:43:39 -- common/autotest_common.sh@819 -- # '[' -z 1492158 ']' 00:06:35.189 17:43:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.189 17:43:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:35.189 17:43:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.189 17:43:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:35.189 17:43:39 -- common/autotest_common.sh@10 -- # set +x 00:06:35.189 [2024-07-22 17:43:39.358283] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:35.189 [2024-07-22 17:43:39.358343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1492158 ] 00:06:35.189 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.189 [2024-07-22 17:43:39.441015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:35.450 [2024-07-22 17:43:39.503987] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:35.450 [2024-07-22 17:43:39.504204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.450 [2024-07-22 17:43:39.504229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.450 [2024-07-22 17:43:39.504231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.022 17:43:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:36.022 17:43:40 -- common/autotest_common.sh@852 -- # return 0 00:06:36.022 17:43:40 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1492464 00:06:36.022 17:43:40 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1492464 /var/tmp/spdk2.sock 00:06:36.022 17:43:40 -- common/autotest_common.sh@640 -- # local es=0 00:06:36.022 17:43:40 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:36.022 17:43:40 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 1492464 /var/tmp/spdk2.sock 00:06:36.022 17:43:40 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:36.022 17:43:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:36.022 17:43:40 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:36.022 17:43:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:36.022 17:43:40 
-- common/autotest_common.sh@643 -- # waitforlisten 1492464 /var/tmp/spdk2.sock 00:06:36.022 17:43:40 -- common/autotest_common.sh@819 -- # '[' -z 1492464 ']' 00:06:36.022 17:43:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.022 17:43:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:36.022 17:43:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.022 17:43:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:36.022 17:43:40 -- common/autotest_common.sh@10 -- # set +x 00:06:36.022 [2024-07-22 17:43:40.245030] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:36.022 [2024-07-22 17:43:40.245078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1492464 ] 00:06:36.022 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.282 [2024-07-22 17:43:40.325785] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1492158 has claimed it. 00:06:36.282 [2024-07-22 17:43:40.325817] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:36.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (1492464) - No such process 00:06:36.854 ERROR: process (pid: 1492464) is no longer running 00:06:36.854 17:43:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:36.854 17:43:40 -- common/autotest_common.sh@852 -- # return 1 00:06:36.854 17:43:40 -- common/autotest_common.sh@643 -- # es=1 00:06:36.854 17:43:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:36.854 17:43:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:36.854 17:43:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:36.854 17:43:40 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:36.854 17:43:40 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:36.854 17:43:40 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:36.854 17:43:40 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:36.854 17:43:40 -- event/cpu_locks.sh@141 -- # killprocess 1492158 00:06:36.854 17:43:40 -- common/autotest_common.sh@926 -- # '[' -z 1492158 ']' 00:06:36.854 17:43:40 -- common/autotest_common.sh@930 -- # kill -0 1492158 00:06:36.854 17:43:40 -- common/autotest_common.sh@931 -- # uname 00:06:36.854 17:43:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:36.854 17:43:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1492158 00:06:36.854 17:43:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:36.854 17:43:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:36.854 17:43:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1492158' 00:06:36.854 killing process with pid 1492158 00:06:36.854 17:43:40 -- common/autotest_common.sh@945 -- # kill 1492158 00:06:36.854 17:43:40 
-- common/autotest_common.sh@950 -- # wait 1492158 00:06:37.115 00:06:37.115 real 0m1.837s 00:06:37.115 user 0m5.282s 00:06:37.115 sys 0m0.374s 00:06:37.115 17:43:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.115 17:43:41 -- common/autotest_common.sh@10 -- # set +x 00:06:37.115 ************************************ 00:06:37.115 END TEST locking_overlapped_coremask 00:06:37.115 ************************************ 00:06:37.115 17:43:41 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:37.115 17:43:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:37.115 17:43:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.115 17:43:41 -- common/autotest_common.sh@10 -- # set +x 00:06:37.115 ************************************ 00:06:37.115 START TEST locking_overlapped_coremask_via_rpc 00:06:37.115 ************************************ 00:06:37.115 17:43:41 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:37.115 17:43:41 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1492513 00:06:37.115 17:43:41 -- event/cpu_locks.sh@149 -- # waitforlisten 1492513 /var/tmp/spdk.sock 00:06:37.115 17:43:41 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:37.115 17:43:41 -- common/autotest_common.sh@819 -- # '[' -z 1492513 ']' 00:06:37.115 17:43:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.115 17:43:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:37.115 17:43:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.115 17:43:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:37.115 17:43:41 -- common/autotest_common.sh@10 -- # set +x 00:06:37.115 [2024-07-22 17:43:41.249997] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:37.115 [2024-07-22 17:43:41.250058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1492513 ] 00:06:37.115 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.115 [2024-07-22 17:43:41.330585] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:37.115 [2024-07-22 17:43:41.330612] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.375 [2024-07-22 17:43:41.393091] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:37.375 [2024-07-22 17:43:41.393304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.375 [2024-07-22 17:43:41.393422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.375 [2024-07-22 17:43:41.393585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.946 17:43:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:37.946 17:43:42 -- common/autotest_common.sh@852 -- # return 0 00:06:37.946 17:43:42 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1492809 00:06:37.946 17:43:42 -- event/cpu_locks.sh@153 -- # waitforlisten 1492809 /var/tmp/spdk2.sock 00:06:37.946 17:43:42 -- common/autotest_common.sh@819 -- # '[' -z 1492809 ']' 00:06:37.946 17:43:42 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:37.946 17:43:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.946 17:43:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:37.946 17:43:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:37.946 17:43:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:37.946 17:43:42 -- common/autotest_common.sh@10 -- # set +x 00:06:37.946 [2024-07-22 17:43:42.131283] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:37.946 [2024-07-22 17:43:42.131334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1492809 ] 00:06:37.946 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.946 [2024-07-22 17:43:42.210038] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:37.946 [2024-07-22 17:43:42.210063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.206 [2024-07-22 17:43:42.316907] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:38.206 [2024-07-22 17:43:42.317139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.206 [2024-07-22 17:43:42.317265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.206 [2024-07-22 17:43:42.317267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:38.777 17:43:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:38.777 17:43:42 -- common/autotest_common.sh@852 -- # return 0 00:06:38.777 17:43:42 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:38.777 17:43:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:38.777 17:43:42 -- common/autotest_common.sh@10 -- # set +x 00:06:38.777 17:43:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:38.777 17:43:42 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:38.777 17:43:42 -- common/autotest_common.sh@640 -- # local es=0 00:06:38.777 17:43:42 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:38.777 17:43:42 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:38.777 17:43:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:38.777 17:43:42 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:38.777 17:43:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:38.777 17:43:42 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:38.777 17:43:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:38.777 17:43:42 -- common/autotest_common.sh@10 -- # set +x 00:06:38.777 [2024-07-22 17:43:42.982407] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1492513 has claimed it. 00:06:38.777 request: 00:06:38.777 { 00:06:38.777 "method": "framework_enable_cpumask_locks", 00:06:38.777 "req_id": 1 00:06:38.777 } 00:06:38.777 Got JSON-RPC error response 00:06:38.777 response: 00:06:38.777 { 00:06:38.777 "code": -32603, 00:06:38.777 "message": "Failed to claim CPU core: 2" 00:06:38.777 } 00:06:38.777 17:43:42 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:38.777 17:43:42 -- common/autotest_common.sh@643 -- # es=1 00:06:38.777 17:43:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:38.777 17:43:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:38.777 17:43:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:38.777 17:43:42 -- event/cpu_locks.sh@158 -- # waitforlisten 1492513 /var/tmp/spdk.sock 00:06:38.777 17:43:42 -- common/autotest_common.sh@819 -- # '[' -z 1492513 ']' 00:06:38.777 17:43:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.777 17:43:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:38.777 17:43:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:38.777 17:43:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:38.777 17:43:42 -- common/autotest_common.sh@10 -- # set +x 00:06:39.037 17:43:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:39.037 17:43:43 -- common/autotest_common.sh@852 -- # return 0 00:06:39.037 17:43:43 -- event/cpu_locks.sh@159 -- # waitforlisten 1492809 /var/tmp/spdk2.sock 00:06:39.037 17:43:43 -- common/autotest_common.sh@819 -- # '[' -z 1492809 ']' 00:06:39.037 17:43:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.037 17:43:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:39.037 17:43:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.037 17:43:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:39.037 17:43:43 -- common/autotest_common.sh@10 -- # set +x 00:06:39.297 17:43:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:39.297 17:43:43 -- common/autotest_common.sh@852 -- # return 0 00:06:39.297 17:43:43 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:39.297 17:43:43 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:39.297 17:43:43 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:39.297 17:43:43 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:39.297 00:06:39.297 real 0m2.200s 00:06:39.297 user 0m0.948s 00:06:39.297 sys 0m0.176s 00:06:39.297 17:43:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.297 17:43:43 -- common/autotest_common.sh@10 -- # set +x 00:06:39.297 ************************************ 00:06:39.297 END TEST locking_overlapped_coremask_via_rpc 00:06:39.297 ************************************ 00:06:39.297 17:43:43 -- event/cpu_locks.sh@174 -- # cleanup 00:06:39.297 17:43:43 -- event/cpu_locks.sh@15 -- # [[ -z 1492513 ]] 00:06:39.297 17:43:43 -- event/cpu_locks.sh@15 -- # killprocess 1492513 00:06:39.297 17:43:43 -- common/autotest_common.sh@926 -- # '[' -z 1492513 ']' 00:06:39.297 17:43:43 -- common/autotest_common.sh@930 -- # kill -0 1492513 00:06:39.297 17:43:43 -- common/autotest_common.sh@931 -- # uname 00:06:39.297 17:43:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:39.297 17:43:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1492513 00:06:39.297 17:43:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:39.297 17:43:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:39.297 17:43:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1492513' 00:06:39.297 killing process with pid 1492513 00:06:39.297 17:43:43 -- common/autotest_common.sh@945 -- # kill 1492513 00:06:39.297 17:43:43 -- common/autotest_common.sh@950 -- # wait 1492513 00:06:39.558 17:43:43 -- event/cpu_locks.sh@16 -- # [[ -z 1492809 ]] 00:06:39.558 17:43:43 -- event/cpu_locks.sh@16 -- # killprocess 1492809 00:06:39.558 17:43:43 -- common/autotest_common.sh@926 -- # '[' -z 1492809 ']' 00:06:39.558 17:43:43 -- common/autotest_common.sh@930 -- # kill -0 1492809 00:06:39.558 17:43:43 -- common/autotest_common.sh@931 -- # uname 
00:06:39.558 17:43:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:39.558 17:43:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1492809 00:06:39.558 17:43:43 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:39.558 17:43:43 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:39.558 17:43:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1492809' 00:06:39.558 killing process with pid 1492809 00:06:39.558 17:43:43 -- common/autotest_common.sh@945 -- # kill 1492809 00:06:39.558 17:43:43 -- common/autotest_common.sh@950 -- # wait 1492809 00:06:39.818 17:43:43 -- event/cpu_locks.sh@18 -- # rm -f 00:06:39.818 17:43:43 -- event/cpu_locks.sh@1 -- # cleanup 00:06:39.818 17:43:43 -- event/cpu_locks.sh@15 -- # [[ -z 1492513 ]] 00:06:39.818 17:43:43 -- event/cpu_locks.sh@15 -- # killprocess 1492513 00:06:39.818 17:43:43 -- common/autotest_common.sh@926 -- # '[' -z 1492513 ']' 00:06:39.818 17:43:43 -- common/autotest_common.sh@930 -- # kill -0 1492513 00:06:39.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1492513) - No such process 00:06:39.818 17:43:43 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1492513 is not found' 00:06:39.818 Process with pid 1492513 is not found 00:06:39.818 17:43:43 -- event/cpu_locks.sh@16 -- # [[ -z 1492809 ]] 00:06:39.818 17:43:43 -- event/cpu_locks.sh@16 -- # killprocess 1492809 00:06:39.818 17:43:43 -- common/autotest_common.sh@926 -- # '[' -z 1492809 ']' 00:06:39.818 17:43:43 -- common/autotest_common.sh@930 -- # kill -0 1492809 00:06:39.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1492809) - No such process 00:06:39.818 17:43:43 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1492809 is not found' 00:06:39.818 Process with pid 1492809 is not found 00:06:39.818 17:43:43 -- event/cpu_locks.sh@18 -- # rm -f 00:06:39.818 00:06:39.818 real 0m15.875s 00:06:39.818 user 0m28.651s 00:06:39.818 sys 0m4.529s 00:06:39.818 17:43:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.818 17:43:43 -- common/autotest_common.sh@10 -- # set +x 00:06:39.818 ************************************ 00:06:39.818 END TEST cpu_locks 00:06:39.818 ************************************ 00:06:39.818 00:06:39.818 real 0m42.834s 00:06:39.818 user 1m26.081s 00:06:39.818 sys 0m7.678s 00:06:39.818 17:43:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.818 17:43:43 -- common/autotest_common.sh@10 -- # set +x 00:06:39.818 ************************************ 00:06:39.818 END TEST event 00:06:39.819 ************************************ 00:06:39.819 17:43:44 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:39.819 17:43:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:39.819 17:43:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:39.819 17:43:44 -- common/autotest_common.sh@10 -- # set +x 00:06:39.819 ************************************ 00:06:39.819 START TEST thread 00:06:39.819 ************************************ 00:06:39.819 17:43:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:40.079 * Looking for test storage... 
00:06:40.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:40.079 17:43:44 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:40.079 17:43:44 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:40.079 17:43:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.079 17:43:44 -- common/autotest_common.sh@10 -- # set +x 00:06:40.079 ************************************ 00:06:40.080 START TEST thread_poller_perf 00:06:40.080 ************************************ 00:06:40.080 17:43:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:40.080 [2024-07-22 17:43:44.143825] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:40.080 [2024-07-22 17:43:44.143945] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1493214 ] 00:06:40.080 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.080 [2024-07-22 17:43:44.233265] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.080 [2024-07-22 17:43:44.309738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.080 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:41.464 ====================================== 00:06:41.464 busy:2615761524 (cyc) 00:06:41.464 total_run_count: 299000 00:06:41.464 tsc_hz: 2600000000 (cyc) 00:06:41.464 ====================================== 00:06:41.464 poller_cost: 8748 (cyc), 3364 (nsec) 00:06:41.464 00:06:41.464 real 0m1.248s 00:06:41.464 user 0m1.146s 00:06:41.464 sys 0m0.096s 00:06:41.464 17:43:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.464 17:43:45 -- common/autotest_common.sh@10 -- # set +x 00:06:41.464 ************************************ 00:06:41.464 END TEST thread_poller_perf 00:06:41.464 ************************************ 00:06:41.464 17:43:45 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:41.464 17:43:45 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:41.464 17:43:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.464 17:43:45 -- common/autotest_common.sh@10 -- # set +x 00:06:41.464 ************************************ 00:06:41.464 START TEST thread_poller_perf 00:06:41.464 ************************************ 00:06:41.464 17:43:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:41.464 [2024-07-22 17:43:45.422291] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:41.464 [2024-07-22 17:43:45.422354] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1493464 ] 00:06:41.464 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.464 [2024-07-22 17:43:45.502233] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.464 [2024-07-22 17:43:45.561421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.464 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:42.406 ====================================== 00:06:42.406 busy:2602355202 (cyc) 00:06:42.406 total_run_count: 4129000 00:06:42.406 tsc_hz: 2600000000 (cyc) 00:06:42.406 ====================================== 00:06:42.406 poller_cost: 630 (cyc), 242 (nsec) 00:06:42.406 00:06:42.406 real 0m1.198s 00:06:42.406 user 0m1.111s 00:06:42.406 sys 0m0.082s 00:06:42.406 17:43:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.406 17:43:46 -- common/autotest_common.sh@10 -- # set +x 00:06:42.406 ************************************ 00:06:42.406 END TEST thread_poller_perf 00:06:42.406 ************************************ 00:06:42.406 17:43:46 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:42.406 00:06:42.406 real 0m2.629s 00:06:42.406 user 0m2.324s 00:06:42.406 sys 0m0.319s 00:06:42.406 17:43:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.406 17:43:46 -- common/autotest_common.sh@10 -- # set +x 00:06:42.406 ************************************ 00:06:42.406 END TEST thread 00:06:42.406 ************************************ 00:06:42.667 17:43:46 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:42.667 17:43:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:42.667 17:43:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.667 17:43:46 -- common/autotest_common.sh@10 -- # set +x 00:06:42.667 ************************************ 00:06:42.667 START TEST accel 00:06:42.667 ************************************ 00:06:42.667 17:43:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:42.667 * Looking for test storage... 00:06:42.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:42.667 17:43:46 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:42.667 17:43:46 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:42.667 17:43:46 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:42.667 17:43:46 -- accel/accel.sh@59 -- # spdk_tgt_pid=1493627 00:06:42.667 17:43:46 -- accel/accel.sh@60 -- # waitforlisten 1493627 00:06:42.667 17:43:46 -- common/autotest_common.sh@819 -- # '[' -z 1493627 ']' 00:06:42.667 17:43:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.667 17:43:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:42.667 17:43:46 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:42.667 17:43:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:42.667 17:43:46 -- accel/accel.sh@58 -- # build_accel_config 00:06:42.667 17:43:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:42.667 17:43:46 -- common/autotest_common.sh@10 -- # set +x 00:06:42.667 17:43:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.667 17:43:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.667 17:43:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.667 17:43:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.667 17:43:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.667 17:43:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.667 17:43:46 -- accel/accel.sh@42 -- # jq -r . 00:06:42.667 [2024-07-22 17:43:46.837568] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:42.667 [2024-07-22 17:43:46.837628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1493627 ] 00:06:42.667 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.667 [2024-07-22 17:43:46.921246] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.928 [2024-07-22 17:43:46.982946] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:42.928 [2024-07-22 17:43:46.983071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.499 17:43:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:43.499 17:43:47 -- common/autotest_common.sh@852 -- # return 0 00:06:43.499 17:43:47 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:43.499 17:43:47 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:43.499 17:43:47 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:43.499 17:43:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:43.499 17:43:47 -- common/autotest_common.sh@10 -- # set +x 00:06:43.499 17:43:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:43.499 17:43:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.499 17:43:47 -- accel/accel.sh@64 -- # IFS== 00:06:43.499 17:43:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:43.499 17:43:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:43.499 17:43:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.499 17:43:47 -- accel/accel.sh@64 -- # IFS== 00:06:43.499 17:43:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:43.499 17:43:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:43.499 17:43:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.499 17:43:47 -- accel/accel.sh@64 -- # IFS== 00:06:43.499 17:43:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:43.499 17:43:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:43.499 17:43:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.499 17:43:47 -- accel/accel.sh@64 -- # IFS== 00:06:43.499 17:43:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:43.499 17:43:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:43.499 17:43:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.499 17:43:47 -- accel/accel.sh@64 -- # IFS== 00:06:43.499 17:43:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:43.499 17:43:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:43.499 17:43:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.499 17:43:47 -- accel/accel.sh@64 -- # IFS== 00:06:43.499 17:43:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:43.499 17:43:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:43.499 17:43:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.499 17:43:47 -- accel/accel.sh@64 -- # IFS== 00:06:43.499 17:43:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:43.499 17:43:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:43.499 17:43:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.499 17:43:47 -- accel/accel.sh@64 -- # IFS== 00:06:43.499 17:43:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:43.499 17:43:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:43.499 17:43:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.499 17:43:47 -- accel/accel.sh@64 -- # IFS== 00:06:43.499 17:43:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:43.499 17:43:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:43.499 17:43:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.499 17:43:47 -- accel/accel.sh@64 -- # IFS== 00:06:43.499 17:43:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:43.499 17:43:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:43.499 17:43:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.499 17:43:47 -- accel/accel.sh@64 -- # IFS== 00:06:43.499 17:43:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:43.499 17:43:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:43.499 17:43:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.499 17:43:47 -- accel/accel.sh@64 -- # IFS== 00:06:43.499 17:43:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:43.499 
17:43:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:43.499 17:43:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.499 17:43:47 -- accel/accel.sh@64 -- # IFS== 00:06:43.499 17:43:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:43.499 17:43:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:43.499 17:43:47 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:43.499 17:43:47 -- accel/accel.sh@64 -- # IFS== 00:06:43.499 17:43:47 -- accel/accel.sh@64 -- # read -r opc module 00:06:43.499 17:43:47 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:43.499 17:43:47 -- accel/accel.sh@67 -- # killprocess 1493627 00:06:43.499 17:43:47 -- common/autotest_common.sh@926 -- # '[' -z 1493627 ']' 00:06:43.499 17:43:47 -- common/autotest_common.sh@930 -- # kill -0 1493627 00:06:43.499 17:43:47 -- common/autotest_common.sh@931 -- # uname 00:06:43.499 17:43:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:43.499 17:43:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1493627 00:06:43.499 17:43:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:43.499 17:43:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:43.499 17:43:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1493627' 00:06:43.499 killing process with pid 1493627 00:06:43.499 17:43:47 -- common/autotest_common.sh@945 -- # kill 1493627 00:06:43.499 17:43:47 -- common/autotest_common.sh@950 -- # wait 1493627 00:06:43.759 17:43:47 -- accel/accel.sh@68 -- # trap - ERR 00:06:43.759 17:43:47 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:43.759 17:43:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:43.759 17:43:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:43.759 17:43:47 -- common/autotest_common.sh@10 -- # set +x 00:06:43.759 17:43:47 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:43.759 17:43:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:43.759 17:43:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.759 17:43:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.759 17:43:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.759 17:43:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.759 17:43:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.759 17:43:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.759 17:43:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.759 17:43:47 -- accel/accel.sh@42 -- # jq -r . 
00:06:43.759 17:43:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.759 17:43:48 -- common/autotest_common.sh@10 -- # set +x 00:06:44.020 17:43:48 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:44.020 17:43:48 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:44.020 17:43:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.020 17:43:48 -- common/autotest_common.sh@10 -- # set +x 00:06:44.020 ************************************ 00:06:44.020 START TEST accel_missing_filename 00:06:44.020 ************************************ 00:06:44.020 17:43:48 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:44.020 17:43:48 -- common/autotest_common.sh@640 -- # local es=0 00:06:44.020 17:43:48 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:44.020 17:43:48 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:44.020 17:43:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:44.020 17:43:48 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:44.020 17:43:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:44.020 17:43:48 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:44.020 17:43:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:44.020 17:43:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.020 17:43:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.020 17:43:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.020 17:43:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.020 17:43:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.020 17:43:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.020 17:43:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.020 17:43:48 -- accel/accel.sh@42 -- # jq -r . 00:06:44.020 [2024-07-22 17:43:48.082143] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:44.020 [2024-07-22 17:43:48.082256] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1493950 ] 00:06:44.020 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.020 [2024-07-22 17:43:48.170858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.020 [2024-07-22 17:43:48.241057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.020 [2024-07-22 17:43:48.272840] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:44.281 [2024-07-22 17:43:48.309455] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:44.281 A filename is required. 
00:06:44.281 17:43:48 -- common/autotest_common.sh@643 -- # es=234 00:06:44.281 17:43:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:44.281 17:43:48 -- common/autotest_common.sh@652 -- # es=106 00:06:44.281 17:43:48 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:44.281 17:43:48 -- common/autotest_common.sh@660 -- # es=1 00:06:44.281 17:43:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:44.281 00:06:44.281 real 0m0.308s 00:06:44.281 user 0m0.230s 00:06:44.281 sys 0m0.119s 00:06:44.281 17:43:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.281 17:43:48 -- common/autotest_common.sh@10 -- # set +x 00:06:44.281 ************************************ 00:06:44.281 END TEST accel_missing_filename 00:06:44.281 ************************************ 00:06:44.281 17:43:48 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:44.281 17:43:48 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:44.281 17:43:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.281 17:43:48 -- common/autotest_common.sh@10 -- # set +x 00:06:44.281 ************************************ 00:06:44.281 START TEST accel_compress_verify 00:06:44.281 ************************************ 00:06:44.281 17:43:48 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:44.281 17:43:48 -- common/autotest_common.sh@640 -- # local es=0 00:06:44.281 17:43:48 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:44.281 17:43:48 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:44.281 17:43:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:44.281 17:43:48 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:44.281 17:43:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:44.281 17:43:48 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:44.281 17:43:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:44.281 17:43:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.281 17:43:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.281 17:43:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.281 17:43:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.281 17:43:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.281 17:43:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.281 17:43:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.281 17:43:48 -- accel/accel.sh@42 -- # jq -r . 00:06:44.281 [2024-07-22 17:43:48.430765] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:44.281 [2024-07-22 17:43:48.430833] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1493988 ] 00:06:44.281 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.281 [2024-07-22 17:43:48.515512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.543 [2024-07-22 17:43:48.582957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.543 [2024-07-22 17:43:48.614703] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:44.543 [2024-07-22 17:43:48.651126] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:44.543 00:06:44.543 Compression does not support the verify option, aborting. 00:06:44.543 17:43:48 -- common/autotest_common.sh@643 -- # es=161 00:06:44.543 17:43:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:44.543 17:43:48 -- common/autotest_common.sh@652 -- # es=33 00:06:44.543 17:43:48 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:44.543 17:43:48 -- common/autotest_common.sh@660 -- # es=1 00:06:44.543 17:43:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:44.543 00:06:44.543 real 0m0.300s 00:06:44.543 user 0m0.209s 00:06:44.543 sys 0m0.130s 00:06:44.543 17:43:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.543 17:43:48 -- common/autotest_common.sh@10 -- # set +x 00:06:44.543 ************************************ 00:06:44.543 END TEST accel_compress_verify 00:06:44.543 ************************************ 00:06:44.543 17:43:48 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:44.543 17:43:48 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:44.543 17:43:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.543 17:43:48 -- common/autotest_common.sh@10 -- # set +x 00:06:44.543 ************************************ 00:06:44.543 START TEST accel_wrong_workload 00:06:44.543 ************************************ 00:06:44.543 17:43:48 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:44.543 17:43:48 -- common/autotest_common.sh@640 -- # local es=0 00:06:44.543 17:43:48 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:44.543 17:43:48 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:44.543 17:43:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:44.543 17:43:48 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:44.543 17:43:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:44.543 17:43:48 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:44.543 17:43:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:44.543 17:43:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.543 17:43:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.543 17:43:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.543 17:43:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.543 17:43:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.543 17:43:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.543 17:43:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.543 17:43:48 -- accel/accel.sh@42 -- # jq -r . 
00:06:44.543 Unsupported workload type: foobar 00:06:44.543 [2024-07-22 17:43:48.771640] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:44.543 accel_perf options: 00:06:44.543 [-h help message] 00:06:44.543 [-q queue depth per core] 00:06:44.543 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:44.543 [-T number of threads per core 00:06:44.543 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:44.543 [-t time in seconds] 00:06:44.543 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:44.543 [ dif_verify, , dif_generate, dif_generate_copy 00:06:44.543 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:44.543 [-l for compress/decompress workloads, name of uncompressed input file 00:06:44.543 [-S for crc32c workload, use this seed value (default 0) 00:06:44.543 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:44.543 [-f for fill workload, use this BYTE value (default 255) 00:06:44.543 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:44.543 [-y verify result if this switch is on] 00:06:44.543 [-a tasks to allocate per core (default: same value as -q)] 00:06:44.543 Can be used to spread operations across a wider range of memory. 00:06:44.543 17:43:48 -- common/autotest_common.sh@643 -- # es=1 00:06:44.543 17:43:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:44.543 17:43:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:44.543 17:43:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:44.543 00:06:44.543 real 0m0.035s 00:06:44.543 user 0m0.018s 00:06:44.543 sys 0m0.017s 00:06:44.543 17:43:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.543 17:43:48 -- common/autotest_common.sh@10 -- # set +x 00:06:44.543 ************************************ 00:06:44.543 END TEST accel_wrong_workload 00:06:44.543 ************************************ 00:06:44.543 Error: writing output failed: Broken pipe 00:06:44.543 17:43:48 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:44.543 17:43:48 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:44.543 17:43:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.543 17:43:48 -- common/autotest_common.sh@10 -- # set +x 00:06:44.805 ************************************ 00:06:44.805 START TEST accel_negative_buffers 00:06:44.805 ************************************ 00:06:44.805 17:43:48 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:44.805 17:43:48 -- common/autotest_common.sh@640 -- # local es=0 00:06:44.805 17:43:48 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:44.805 17:43:48 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:44.805 17:43:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:44.805 17:43:48 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:44.805 17:43:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:44.805 17:43:48 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:44.805 17:43:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:06:44.805 17:43:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.805 17:43:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.805 17:43:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.805 17:43:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.805 17:43:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.805 17:43:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.805 17:43:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.805 17:43:48 -- accel/accel.sh@42 -- # jq -r . 00:06:44.805 -x option must be non-negative. 00:06:44.805 [2024-07-22 17:43:48.850345] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:44.805 accel_perf options: 00:06:44.805 [-h help message] 00:06:44.805 [-q queue depth per core] 00:06:44.805 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:44.805 [-T number of threads per core 00:06:44.805 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:44.805 [-t time in seconds] 00:06:44.805 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:44.805 [ dif_verify, , dif_generate, dif_generate_copy 00:06:44.805 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:44.805 [-l for compress/decompress workloads, name of uncompressed input file 00:06:44.805 [-S for crc32c workload, use this seed value (default 0) 00:06:44.805 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:44.805 [-f for fill workload, use this BYTE value (default 255) 00:06:44.805 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:44.805 [-y verify result if this switch is on] 00:06:44.805 [-a tasks to allocate per core (default: same value as -q)] 00:06:44.805 Can be used to spread operations across a wider range of memory. 
00:06:44.805 17:43:48 -- common/autotest_common.sh@643 -- # es=1 00:06:44.805 17:43:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:44.805 17:43:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:44.805 17:43:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:44.805 00:06:44.805 real 0m0.036s 00:06:44.805 user 0m0.019s 00:06:44.805 sys 0m0.017s 00:06:44.805 17:43:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.805 17:43:48 -- common/autotest_common.sh@10 -- # set +x 00:06:44.805 ************************************ 00:06:44.805 END TEST accel_negative_buffers 00:06:44.805 ************************************ 00:06:44.805 Error: writing output failed: Broken pipe 00:06:44.805 17:43:48 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:44.805 17:43:48 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:44.805 17:43:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.805 17:43:48 -- common/autotest_common.sh@10 -- # set +x 00:06:44.805 ************************************ 00:06:44.805 START TEST accel_crc32c 00:06:44.805 ************************************ 00:06:44.805 17:43:48 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:44.805 17:43:48 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.805 17:43:48 -- accel/accel.sh@17 -- # local accel_module 00:06:44.805 17:43:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:44.805 17:43:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:44.805 17:43:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.805 17:43:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.805 17:43:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.805 17:43:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.805 17:43:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.805 17:43:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.805 17:43:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.805 17:43:48 -- accel/accel.sh@42 -- # jq -r . 00:06:44.805 [2024-07-22 17:43:48.929166] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:44.805 [2024-07-22 17:43:48.929271] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494194 ] 00:06:44.805 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.805 [2024-07-22 17:43:49.017835] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.065 [2024-07-22 17:43:49.092825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.006 17:43:50 -- accel/accel.sh@18 -- # out=' 00:06:46.006 SPDK Configuration: 00:06:46.006 Core mask: 0x1 00:06:46.006 00:06:46.006 Accel Perf Configuration: 00:06:46.006 Workload Type: crc32c 00:06:46.006 CRC-32C seed: 32 00:06:46.006 Transfer size: 4096 bytes 00:06:46.006 Vector count 1 00:06:46.006 Module: software 00:06:46.006 Queue depth: 32 00:06:46.006 Allocate depth: 32 00:06:46.006 # threads/core: 1 00:06:46.006 Run time: 1 seconds 00:06:46.006 Verify: Yes 00:06:46.006 00:06:46.006 Running for 1 seconds... 
00:06:46.006 00:06:46.006 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:46.006 ------------------------------------------------------------------------------------ 00:06:46.006 0,0 483200/s 1887 MiB/s 0 0 00:06:46.006 ==================================================================================== 00:06:46.006 Total 483200/s 1887 MiB/s 0 0' 00:06:46.006 17:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:46.006 17:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:46.006 17:43:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:46.006 17:43:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:46.006 17:43:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.006 17:43:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.006 17:43:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.006 17:43:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.006 17:43:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.006 17:43:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.006 17:43:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.006 17:43:50 -- accel/accel.sh@42 -- # jq -r . 00:06:46.006 [2024-07-22 17:43:50.242789] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:46.006 [2024-07-22 17:43:50.242909] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494356 ] 00:06:46.266 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.266 [2024-07-22 17:43:50.337445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.266 [2024-07-22 17:43:50.399491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.266 17:43:50 -- accel/accel.sh@21 -- # val= 00:06:46.266 17:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.266 17:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:46.266 17:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:46.266 17:43:50 -- accel/accel.sh@21 -- # val= 00:06:46.266 17:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.266 17:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:46.266 17:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:46.266 17:43:50 -- accel/accel.sh@21 -- # val=0x1 00:06:46.266 17:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.266 17:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:46.266 17:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:46.266 17:43:50 -- accel/accel.sh@21 -- # val= 00:06:46.266 17:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.266 17:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:46.266 17:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:46.266 17:43:50 -- accel/accel.sh@21 -- # val= 00:06:46.266 17:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.266 17:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:46.266 17:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:46.266 17:43:50 -- accel/accel.sh@21 -- # val=crc32c 00:06:46.266 17:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.266 17:43:50 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:46.266 17:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:46.266 17:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:46.266 17:43:50 -- accel/accel.sh@21 -- # val=32 00:06:46.266 17:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.267 17:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:46.267 
17:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:46.267 17:43:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:46.267 17:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.267 17:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:46.267 17:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:46.267 17:43:50 -- accel/accel.sh@21 -- # val= 00:06:46.267 17:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.267 17:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:46.267 17:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:46.267 17:43:50 -- accel/accel.sh@21 -- # val=software 00:06:46.267 17:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.267 17:43:50 -- accel/accel.sh@23 -- # accel_module=software 00:06:46.267 17:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:46.267 17:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:46.267 17:43:50 -- accel/accel.sh@21 -- # val=32 00:06:46.267 17:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.267 17:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:46.267 17:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:46.267 17:43:50 -- accel/accel.sh@21 -- # val=32 00:06:46.267 17:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.267 17:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:46.267 17:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:46.267 17:43:50 -- accel/accel.sh@21 -- # val=1 00:06:46.267 17:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.267 17:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:46.267 17:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:46.267 17:43:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:46.267 17:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.267 17:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:46.267 17:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:46.267 17:43:50 -- accel/accel.sh@21 -- # val=Yes 00:06:46.267 17:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.267 17:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:46.267 17:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:46.267 17:43:50 -- accel/accel.sh@21 -- # val= 00:06:46.267 17:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.267 17:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:46.267 17:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:46.267 17:43:50 -- accel/accel.sh@21 -- # val= 00:06:46.267 17:43:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.267 17:43:50 -- accel/accel.sh@20 -- # IFS=: 00:06:46.267 17:43:50 -- accel/accel.sh@20 -- # read -r var val 00:06:47.651 17:43:51 -- accel/accel.sh@21 -- # val= 00:06:47.651 17:43:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.651 17:43:51 -- accel/accel.sh@20 -- # IFS=: 00:06:47.651 17:43:51 -- accel/accel.sh@20 -- # read -r var val 00:06:47.651 17:43:51 -- accel/accel.sh@21 -- # val= 00:06:47.651 17:43:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.651 17:43:51 -- accel/accel.sh@20 -- # IFS=: 00:06:47.651 17:43:51 -- accel/accel.sh@20 -- # read -r var val 00:06:47.651 17:43:51 -- accel/accel.sh@21 -- # val= 00:06:47.651 17:43:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.651 17:43:51 -- accel/accel.sh@20 -- # IFS=: 00:06:47.651 17:43:51 -- accel/accel.sh@20 -- # read -r var val 00:06:47.651 17:43:51 -- accel/accel.sh@21 -- # val= 00:06:47.651 17:43:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.651 17:43:51 -- accel/accel.sh@20 -- # IFS=: 00:06:47.651 17:43:51 -- accel/accel.sh@20 -- # read -r var val 00:06:47.651 17:43:51 -- accel/accel.sh@21 -- # val= 00:06:47.651 17:43:51 -- accel/accel.sh@22 -- # case "$var" in 
00:06:47.651 17:43:51 -- accel/accel.sh@20 -- # IFS=: 00:06:47.651 17:43:51 -- accel/accel.sh@20 -- # read -r var val 00:06:47.651 17:43:51 -- accel/accel.sh@21 -- # val= 00:06:47.651 17:43:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.651 17:43:51 -- accel/accel.sh@20 -- # IFS=: 00:06:47.651 17:43:51 -- accel/accel.sh@20 -- # read -r var val 00:06:47.651 17:43:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:47.651 17:43:51 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:47.651 17:43:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.651 00:06:47.651 real 0m2.624s 00:06:47.651 user 0m2.380s 00:06:47.651 sys 0m0.249s 00:06:47.651 17:43:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.651 17:43:51 -- common/autotest_common.sh@10 -- # set +x 00:06:47.651 ************************************ 00:06:47.651 END TEST accel_crc32c 00:06:47.651 ************************************ 00:06:47.651 17:43:51 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:47.651 17:43:51 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:47.651 17:43:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.651 17:43:51 -- common/autotest_common.sh@10 -- # set +x 00:06:47.651 ************************************ 00:06:47.651 START TEST accel_crc32c_C2 00:06:47.651 ************************************ 00:06:47.651 17:43:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:47.651 17:43:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.651 17:43:51 -- accel/accel.sh@17 -- # local accel_module 00:06:47.651 17:43:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:47.651 17:43:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:47.651 17:43:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.651 17:43:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.651 17:43:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.651 17:43:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.651 17:43:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.651 17:43:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.651 17:43:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.651 17:43:51 -- accel/accel.sh@42 -- # jq -r . 00:06:47.651 [2024-07-22 17:43:51.594347] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:47.651 [2024-07-22 17:43:51.594436] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494656 ] 00:06:47.651 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.651 [2024-07-22 17:43:51.679627] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.651 [2024-07-22 17:43:51.749899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.035 17:43:52 -- accel/accel.sh@18 -- # out=' 00:06:49.035 SPDK Configuration: 00:06:49.035 Core mask: 0x1 00:06:49.035 00:06:49.035 Accel Perf Configuration: 00:06:49.035 Workload Type: crc32c 00:06:49.035 CRC-32C seed: 0 00:06:49.035 Transfer size: 4096 bytes 00:06:49.035 Vector count 2 00:06:49.035 Module: software 00:06:49.035 Queue depth: 32 00:06:49.035 Allocate depth: 32 00:06:49.035 # threads/core: 1 00:06:49.035 Run time: 1 seconds 00:06:49.035 Verify: Yes 00:06:49.035 00:06:49.035 Running for 1 seconds... 00:06:49.035 00:06:49.035 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:49.035 ------------------------------------------------------------------------------------ 00:06:49.035 0,0 408800/s 3193 MiB/s 0 0 00:06:49.035 ==================================================================================== 00:06:49.035 Total 408800/s 1596 MiB/s 0 0' 00:06:49.035 17:43:52 -- accel/accel.sh@20 -- # IFS=: 00:06:49.035 17:43:52 -- accel/accel.sh@20 -- # read -r var val 00:06:49.035 17:43:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:49.035 17:43:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:49.035 17:43:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.035 17:43:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.035 17:43:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.035 17:43:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.035 17:43:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.035 17:43:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.035 17:43:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.035 17:43:52 -- accel/accel.sh@42 -- # jq -r . 00:06:49.035 [2024-07-22 17:43:52.898191] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:49.035 [2024-07-22 17:43:52.898289] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1494958 ] 00:06:49.035 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.035 [2024-07-22 17:43:52.981630] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.035 [2024-07-22 17:43:53.044985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.035 17:43:53 -- accel/accel.sh@21 -- # val= 00:06:49.035 17:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:49.035 17:43:53 -- accel/accel.sh@21 -- # val= 00:06:49.035 17:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:49.035 17:43:53 -- accel/accel.sh@21 -- # val=0x1 00:06:49.035 17:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:49.035 17:43:53 -- accel/accel.sh@21 -- # val= 00:06:49.035 17:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:49.035 17:43:53 -- accel/accel.sh@21 -- # val= 00:06:49.035 17:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:49.035 17:43:53 -- accel/accel.sh@21 -- # val=crc32c 00:06:49.035 17:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.035 17:43:53 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:49.035 17:43:53 -- accel/accel.sh@21 -- # val=0 00:06:49.035 17:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:49.035 17:43:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:49.035 17:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:49.035 17:43:53 -- accel/accel.sh@21 -- # val= 00:06:49.035 17:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:49.035 17:43:53 -- accel/accel.sh@21 -- # val=software 00:06:49.035 17:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.035 17:43:53 -- accel/accel.sh@23 -- # accel_module=software 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:49.035 17:43:53 -- accel/accel.sh@21 -- # val=32 00:06:49.035 17:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:49.035 17:43:53 -- accel/accel.sh@21 -- # val=32 00:06:49.035 17:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:49.035 17:43:53 -- 
accel/accel.sh@21 -- # val=1 00:06:49.035 17:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:49.035 17:43:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:49.035 17:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:49.035 17:43:53 -- accel/accel.sh@21 -- # val=Yes 00:06:49.035 17:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:49.035 17:43:53 -- accel/accel.sh@21 -- # val= 00:06:49.035 17:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:49.035 17:43:53 -- accel/accel.sh@21 -- # val= 00:06:49.035 17:43:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # IFS=: 00:06:49.035 17:43:53 -- accel/accel.sh@20 -- # read -r var val 00:06:49.976 17:43:54 -- accel/accel.sh@21 -- # val= 00:06:49.976 17:43:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.976 17:43:54 -- accel/accel.sh@20 -- # IFS=: 00:06:49.976 17:43:54 -- accel/accel.sh@20 -- # read -r var val 00:06:49.976 17:43:54 -- accel/accel.sh@21 -- # val= 00:06:49.976 17:43:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.976 17:43:54 -- accel/accel.sh@20 -- # IFS=: 00:06:49.976 17:43:54 -- accel/accel.sh@20 -- # read -r var val 00:06:49.976 17:43:54 -- accel/accel.sh@21 -- # val= 00:06:49.976 17:43:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.976 17:43:54 -- accel/accel.sh@20 -- # IFS=: 00:06:49.976 17:43:54 -- accel/accel.sh@20 -- # read -r var val 00:06:49.976 17:43:54 -- accel/accel.sh@21 -- # val= 00:06:49.976 17:43:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.976 17:43:54 -- accel/accel.sh@20 -- # IFS=: 00:06:49.976 17:43:54 -- accel/accel.sh@20 -- # read -r var val 00:06:49.976 17:43:54 -- accel/accel.sh@21 -- # val= 00:06:49.976 17:43:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.976 17:43:54 -- accel/accel.sh@20 -- # IFS=: 00:06:49.976 17:43:54 -- accel/accel.sh@20 -- # read -r var val 00:06:49.976 17:43:54 -- accel/accel.sh@21 -- # val= 00:06:49.976 17:43:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.976 17:43:54 -- accel/accel.sh@20 -- # IFS=: 00:06:49.976 17:43:54 -- accel/accel.sh@20 -- # read -r var val 00:06:49.976 17:43:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:49.976 17:43:54 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:49.976 17:43:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.976 00:06:49.976 real 0m2.604s 00:06:49.976 user 0m2.374s 00:06:49.976 sys 0m0.237s 00:06:49.976 17:43:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.976 17:43:54 -- common/autotest_common.sh@10 -- # set +x 00:06:49.976 ************************************ 00:06:49.976 END TEST accel_crc32c_C2 00:06:49.976 ************************************ 00:06:49.976 17:43:54 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:49.976 17:43:54 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:49.976 17:43:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:49.976 17:43:54 -- common/autotest_common.sh@10 -- # set +x 00:06:49.976 ************************************ 00:06:49.976 START TEST accel_copy 
00:06:49.976 ************************************ 00:06:49.976 17:43:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:49.976 17:43:54 -- accel/accel.sh@16 -- # local accel_opc 00:06:49.976 17:43:54 -- accel/accel.sh@17 -- # local accel_module 00:06:49.976 17:43:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:49.976 17:43:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:49.976 17:43:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.976 17:43:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.976 17:43:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.976 17:43:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.976 17:43:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.976 17:43:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.976 17:43:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.976 17:43:54 -- accel/accel.sh@42 -- # jq -r . 00:06:49.976 [2024-07-22 17:43:54.242088] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:49.976 [2024-07-22 17:43:54.242189] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1495053 ] 00:06:50.237 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.237 [2024-07-22 17:43:54.328452] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.237 [2024-07-22 17:43:54.397854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.622 17:43:55 -- accel/accel.sh@18 -- # out=' 00:06:51.622 SPDK Configuration: 00:06:51.622 Core mask: 0x1 00:06:51.622 00:06:51.622 Accel Perf Configuration: 00:06:51.622 Workload Type: copy 00:06:51.622 Transfer size: 4096 bytes 00:06:51.622 Vector count 1 00:06:51.622 Module: software 00:06:51.622 Queue depth: 32 00:06:51.622 Allocate depth: 32 00:06:51.622 # threads/core: 1 00:06:51.622 Run time: 1 seconds 00:06:51.622 Verify: Yes 00:06:51.622 00:06:51.622 Running for 1 seconds... 00:06:51.622 00:06:51.622 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:51.622 ------------------------------------------------------------------------------------ 00:06:51.622 0,0 330432/s 1290 MiB/s 0 0 00:06:51.622 ==================================================================================== 00:06:51.622 Total 330432/s 1290 MiB/s 0 0' 00:06:51.622 17:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:51.622 17:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:51.622 17:43:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:51.622 17:43:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:51.622 17:43:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.622 17:43:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.622 17:43:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.622 17:43:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.622 17:43:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.622 17:43:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.622 17:43:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.622 17:43:55 -- accel/accel.sh@42 -- # jq -r . 00:06:51.622 [2024-07-22 17:43:55.547267] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:51.622 [2024-07-22 17:43:55.547381] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1495303 ] 00:06:51.622 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.622 [2024-07-22 17:43:55.630232] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.622 [2024-07-22 17:43:55.695231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.622 17:43:55 -- accel/accel.sh@21 -- # val= 00:06:51.622 17:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.622 17:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:51.622 17:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:51.622 17:43:55 -- accel/accel.sh@21 -- # val= 00:06:51.622 17:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.622 17:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:51.622 17:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:51.622 17:43:55 -- accel/accel.sh@21 -- # val=0x1 00:06:51.622 17:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.622 17:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:51.622 17:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:51.622 17:43:55 -- accel/accel.sh@21 -- # val= 00:06:51.622 17:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.622 17:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:51.622 17:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:51.622 17:43:55 -- accel/accel.sh@21 -- # val= 00:06:51.622 17:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.622 17:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:51.622 17:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:51.622 17:43:55 -- accel/accel.sh@21 -- # val=copy 00:06:51.622 17:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.622 17:43:55 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:51.622 17:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:51.622 17:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:51.622 17:43:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:51.622 17:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.622 17:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:51.622 17:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:51.622 17:43:55 -- accel/accel.sh@21 -- # val= 00:06:51.622 17:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.622 17:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:51.622 17:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:51.622 17:43:55 -- accel/accel.sh@21 -- # val=software 00:06:51.622 17:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.622 17:43:55 -- accel/accel.sh@23 -- # accel_module=software 00:06:51.622 17:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:51.622 17:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:51.622 17:43:55 -- accel/accel.sh@21 -- # val=32 00:06:51.622 17:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.622 17:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:51.622 17:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:51.622 17:43:55 -- accel/accel.sh@21 -- # val=32 00:06:51.622 17:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.622 17:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:51.622 17:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:51.622 17:43:55 -- accel/accel.sh@21 -- # val=1 00:06:51.623 17:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.623 17:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:51.623 17:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:51.623 17:43:55 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:51.623 17:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.623 17:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:51.623 17:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:51.623 17:43:55 -- accel/accel.sh@21 -- # val=Yes 00:06:51.623 17:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.623 17:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:51.623 17:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:51.623 17:43:55 -- accel/accel.sh@21 -- # val= 00:06:51.623 17:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.623 17:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:51.623 17:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:51.623 17:43:55 -- accel/accel.sh@21 -- # val= 00:06:51.623 17:43:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.623 17:43:55 -- accel/accel.sh@20 -- # IFS=: 00:06:51.623 17:43:55 -- accel/accel.sh@20 -- # read -r var val 00:06:52.563 17:43:56 -- accel/accel.sh@21 -- # val= 00:06:52.563 17:43:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.563 17:43:56 -- accel/accel.sh@20 -- # IFS=: 00:06:52.563 17:43:56 -- accel/accel.sh@20 -- # read -r var val 00:06:52.563 17:43:56 -- accel/accel.sh@21 -- # val= 00:06:52.563 17:43:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.563 17:43:56 -- accel/accel.sh@20 -- # IFS=: 00:06:52.563 17:43:56 -- accel/accel.sh@20 -- # read -r var val 00:06:52.563 17:43:56 -- accel/accel.sh@21 -- # val= 00:06:52.563 17:43:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.563 17:43:56 -- accel/accel.sh@20 -- # IFS=: 00:06:52.563 17:43:56 -- accel/accel.sh@20 -- # read -r var val 00:06:52.563 17:43:56 -- accel/accel.sh@21 -- # val= 00:06:52.563 17:43:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.563 17:43:56 -- accel/accel.sh@20 -- # IFS=: 00:06:52.563 17:43:56 -- accel/accel.sh@20 -- # read -r var val 00:06:52.563 17:43:56 -- accel/accel.sh@21 -- # val= 00:06:52.563 17:43:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.563 17:43:56 -- accel/accel.sh@20 -- # IFS=: 00:06:52.563 17:43:56 -- accel/accel.sh@20 -- # read -r var val 00:06:52.563 17:43:56 -- accel/accel.sh@21 -- # val= 00:06:52.563 17:43:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.563 17:43:56 -- accel/accel.sh@20 -- # IFS=: 00:06:52.563 17:43:56 -- accel/accel.sh@20 -- # read -r var val 00:06:52.563 17:43:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:52.563 17:43:56 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:52.563 17:43:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.563 00:06:52.563 real 0m2.606s 00:06:52.563 user 0m2.371s 00:06:52.563 sys 0m0.241s 00:06:52.563 17:43:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.563 17:43:56 -- common/autotest_common.sh@10 -- # set +x 00:06:52.563 ************************************ 00:06:52.563 END TEST accel_copy 00:06:52.563 ************************************ 00:06:52.824 17:43:56 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:52.824 17:43:56 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:52.824 17:43:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.824 17:43:56 -- common/autotest_common.sh@10 -- # set +x 00:06:52.824 ************************************ 00:06:52.824 START TEST accel_fill 00:06:52.824 ************************************ 00:06:52.824 17:43:56 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:52.824 17:43:56 -- accel/accel.sh@16 -- # local accel_opc 
00:06:52.824 17:43:56 -- accel/accel.sh@17 -- # local accel_module 00:06:52.824 17:43:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:52.824 17:43:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:52.824 17:43:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.824 17:43:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.824 17:43:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.824 17:43:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.824 17:43:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.824 17:43:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.824 17:43:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.824 17:43:56 -- accel/accel.sh@42 -- # jq -r . 00:06:52.824 [2024-07-22 17:43:56.888850] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:52.824 [2024-07-22 17:43:56.888926] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1495622 ] 00:06:52.824 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.824 [2024-07-22 17:43:56.973253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.824 [2024-07-22 17:43:57.036776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.207 17:43:58 -- accel/accel.sh@18 -- # out=' 00:06:54.207 SPDK Configuration: 00:06:54.207 Core mask: 0x1 00:06:54.207 00:06:54.207 Accel Perf Configuration: 00:06:54.207 Workload Type: fill 00:06:54.207 Fill pattern: 0x80 00:06:54.207 Transfer size: 4096 bytes 00:06:54.207 Vector count 1 00:06:54.207 Module: software 00:06:54.207 Queue depth: 64 00:06:54.207 Allocate depth: 64 00:06:54.207 # threads/core: 1 00:06:54.207 Run time: 1 seconds 00:06:54.207 Verify: Yes 00:06:54.207 00:06:54.207 Running for 1 seconds... 00:06:54.207 00:06:54.207 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:54.208 ------------------------------------------------------------------------------------ 00:06:54.208 0,0 510016/s 1992 MiB/s 0 0 00:06:54.208 ==================================================================================== 00:06:54.208 Total 510016/s 1992 MiB/s 0 0' 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:54.208 17:43:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:54.208 17:43:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:54.208 17:43:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.208 17:43:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.208 17:43:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.208 17:43:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.208 17:43:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.208 17:43:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.208 17:43:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.208 17:43:58 -- accel/accel.sh@42 -- # jq -r . 00:06:54.208 [2024-07-22 17:43:58.182470] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
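As a quick sanity check on the accel_fill numbers reported above, the Bandwidth column is simply the Transfers column multiplied by the 4096-byte transfer size; the one-liner below reproduces the 1992 MiB/s figure from the 510016/s row (an illustrative check of how the columns relate, not something the harness runs).

# Bandwidth check for the fill run above: 510016 transfers/s x 4096 bytes per transfer.
awk 'BEGIN { printf "%.0f MiB/s\n", 510016 * 4096 / (1024 * 1024) }'
# prints 1992 MiB/s, matching the table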
00:06:54.208 [2024-07-22 17:43:58.182553] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1495795 ] 00:06:54.208 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.208 [2024-07-22 17:43:58.267106] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.208 [2024-07-22 17:43:58.329210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.208 17:43:58 -- accel/accel.sh@21 -- # val= 00:06:54.208 17:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:54.208 17:43:58 -- accel/accel.sh@21 -- # val= 00:06:54.208 17:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:54.208 17:43:58 -- accel/accel.sh@21 -- # val=0x1 00:06:54.208 17:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:54.208 17:43:58 -- accel/accel.sh@21 -- # val= 00:06:54.208 17:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:54.208 17:43:58 -- accel/accel.sh@21 -- # val= 00:06:54.208 17:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:54.208 17:43:58 -- accel/accel.sh@21 -- # val=fill 00:06:54.208 17:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.208 17:43:58 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:54.208 17:43:58 -- accel/accel.sh@21 -- # val=0x80 00:06:54.208 17:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:54.208 17:43:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.208 17:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:54.208 17:43:58 -- accel/accel.sh@21 -- # val= 00:06:54.208 17:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:54.208 17:43:58 -- accel/accel.sh@21 -- # val=software 00:06:54.208 17:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.208 17:43:58 -- accel/accel.sh@23 -- # accel_module=software 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:54.208 17:43:58 -- accel/accel.sh@21 -- # val=64 00:06:54.208 17:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:54.208 17:43:58 -- accel/accel.sh@21 -- # val=64 00:06:54.208 17:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:54.208 17:43:58 -- 
accel/accel.sh@21 -- # val=1 00:06:54.208 17:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:54.208 17:43:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:54.208 17:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:54.208 17:43:58 -- accel/accel.sh@21 -- # val=Yes 00:06:54.208 17:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:54.208 17:43:58 -- accel/accel.sh@21 -- # val= 00:06:54.208 17:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:54.208 17:43:58 -- accel/accel.sh@21 -- # val= 00:06:54.208 17:43:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # IFS=: 00:06:54.208 17:43:58 -- accel/accel.sh@20 -- # read -r var val 00:06:55.592 17:43:59 -- accel/accel.sh@21 -- # val= 00:06:55.592 17:43:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.592 17:43:59 -- accel/accel.sh@20 -- # IFS=: 00:06:55.592 17:43:59 -- accel/accel.sh@20 -- # read -r var val 00:06:55.592 17:43:59 -- accel/accel.sh@21 -- # val= 00:06:55.592 17:43:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.592 17:43:59 -- accel/accel.sh@20 -- # IFS=: 00:06:55.592 17:43:59 -- accel/accel.sh@20 -- # read -r var val 00:06:55.592 17:43:59 -- accel/accel.sh@21 -- # val= 00:06:55.592 17:43:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.592 17:43:59 -- accel/accel.sh@20 -- # IFS=: 00:06:55.592 17:43:59 -- accel/accel.sh@20 -- # read -r var val 00:06:55.592 17:43:59 -- accel/accel.sh@21 -- # val= 00:06:55.592 17:43:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.592 17:43:59 -- accel/accel.sh@20 -- # IFS=: 00:06:55.592 17:43:59 -- accel/accel.sh@20 -- # read -r var val 00:06:55.592 17:43:59 -- accel/accel.sh@21 -- # val= 00:06:55.592 17:43:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.592 17:43:59 -- accel/accel.sh@20 -- # IFS=: 00:06:55.592 17:43:59 -- accel/accel.sh@20 -- # read -r var val 00:06:55.592 17:43:59 -- accel/accel.sh@21 -- # val= 00:06:55.592 17:43:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.592 17:43:59 -- accel/accel.sh@20 -- # IFS=: 00:06:55.592 17:43:59 -- accel/accel.sh@20 -- # read -r var val 00:06:55.592 17:43:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:55.592 17:43:59 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:55.592 17:43:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.592 00:06:55.592 real 0m2.591s 00:06:55.592 user 0m2.355s 00:06:55.592 sys 0m0.243s 00:06:55.592 17:43:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.592 17:43:59 -- common/autotest_common.sh@10 -- # set +x 00:06:55.592 ************************************ 00:06:55.592 END TEST accel_fill 00:06:55.592 ************************************ 00:06:55.592 17:43:59 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:55.592 17:43:59 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:55.592 17:43:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.592 17:43:59 -- common/autotest_common.sh@10 -- # set +x 00:06:55.592 ************************************ 00:06:55.592 START TEST 
accel_copy_crc32c 00:06:55.592 ************************************ 00:06:55.592 17:43:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:06:55.592 17:43:59 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.592 17:43:59 -- accel/accel.sh@17 -- # local accel_module 00:06:55.592 17:43:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:55.592 17:43:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:55.592 17:43:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.592 17:43:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.592 17:43:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.592 17:43:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.592 17:43:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.592 17:43:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.592 17:43:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.592 17:43:59 -- accel/accel.sh@42 -- # jq -r . 00:06:55.592 [2024-07-22 17:43:59.524829] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:55.592 [2024-07-22 17:43:59.524905] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1495981 ] 00:06:55.592 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.592 [2024-07-22 17:43:59.610591] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.592 [2024-07-22 17:43:59.681941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.533 17:44:00 -- accel/accel.sh@18 -- # out=' 00:06:56.533 SPDK Configuration: 00:06:56.533 Core mask: 0x1 00:06:56.533 00:06:56.533 Accel Perf Configuration: 00:06:56.533 Workload Type: copy_crc32c 00:06:56.533 CRC-32C seed: 0 00:06:56.533 Vector size: 4096 bytes 00:06:56.533 Transfer size: 4096 bytes 00:06:56.533 Vector count 1 00:06:56.533 Module: software 00:06:56.533 Queue depth: 32 00:06:56.533 Allocate depth: 32 00:06:56.533 # threads/core: 1 00:06:56.533 Run time: 1 seconds 00:06:56.533 Verify: Yes 00:06:56.533 00:06:56.533 Running for 1 seconds... 00:06:56.533 00:06:56.533 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:56.533 ------------------------------------------------------------------------------------ 00:06:56.533 0,0 268928/s 1050 MiB/s 0 0 00:06:56.534 ==================================================================================== 00:06:56.534 Total 268928/s 1050 MiB/s 0 0' 00:06:56.534 17:44:00 -- accel/accel.sh@20 -- # IFS=: 00:06:56.534 17:44:00 -- accel/accel.sh@20 -- # read -r var val 00:06:56.534 17:44:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:56.534 17:44:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:56.534 17:44:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.534 17:44:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.534 17:44:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.534 17:44:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.534 17:44:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.534 17:44:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.534 17:44:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.534 17:44:00 -- accel/accel.sh@42 -- # jq -r . 
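The copy_crc32c invocation traced above can, in principle, be reproduced by hand against the same SPDK build; the binary path and the -t/-w/-y flags below are copied from the traced command, while feeding a minimal '{"subsystems": []}' JSON config through process substitution is only an assumption standing in for the /dev/fd/62 config that accel.sh builds itself.

# Hedged reproduction of the traced copy_crc32c workload (path and flags taken from the trace above;
# the empty-subsystems JSON config is an assumption, not the config the harness generates).
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./build/examples/accel_perf -c <(echo '{"subsystems": []}') -t 1 -w copy_crc32c -y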
00:06:56.795 [2024-07-22 17:44:00.830821] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:56.795 [2024-07-22 17:44:00.830894] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1496269 ] 00:06:56.795 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.795 [2024-07-22 17:44:00.914567] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.795 [2024-07-22 17:44:00.977202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.795 17:44:01 -- accel/accel.sh@21 -- # val= 00:06:56.795 17:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:56.795 17:44:01 -- accel/accel.sh@21 -- # val= 00:06:56.795 17:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:56.795 17:44:01 -- accel/accel.sh@21 -- # val=0x1 00:06:56.795 17:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:56.795 17:44:01 -- accel/accel.sh@21 -- # val= 00:06:56.795 17:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:56.795 17:44:01 -- accel/accel.sh@21 -- # val= 00:06:56.795 17:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:56.795 17:44:01 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:56.795 17:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.795 17:44:01 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:56.795 17:44:01 -- accel/accel.sh@21 -- # val=0 00:06:56.795 17:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:56.795 17:44:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:56.795 17:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:56.795 17:44:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:56.795 17:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:56.795 17:44:01 -- accel/accel.sh@21 -- # val= 00:06:56.795 17:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:56.795 17:44:01 -- accel/accel.sh@21 -- # val=software 00:06:56.795 17:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.795 17:44:01 -- accel/accel.sh@23 -- # accel_module=software 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:56.795 17:44:01 -- accel/accel.sh@21 -- # val=32 00:06:56.795 17:44:01 -- accel/accel.sh@22 -- # case "$var" in 
00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:56.795 17:44:01 -- accel/accel.sh@21 -- # val=32 00:06:56.795 17:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:56.795 17:44:01 -- accel/accel.sh@21 -- # val=1 00:06:56.795 17:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:56.795 17:44:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:56.795 17:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:56.795 17:44:01 -- accel/accel.sh@21 -- # val=Yes 00:06:56.795 17:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:56.795 17:44:01 -- accel/accel.sh@21 -- # val= 00:06:56.795 17:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:56.795 17:44:01 -- accel/accel.sh@21 -- # val= 00:06:56.795 17:44:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # IFS=: 00:06:56.795 17:44:01 -- accel/accel.sh@20 -- # read -r var val 00:06:58.177 17:44:02 -- accel/accel.sh@21 -- # val= 00:06:58.178 17:44:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.178 17:44:02 -- accel/accel.sh@20 -- # IFS=: 00:06:58.178 17:44:02 -- accel/accel.sh@20 -- # read -r var val 00:06:58.178 17:44:02 -- accel/accel.sh@21 -- # val= 00:06:58.178 17:44:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.178 17:44:02 -- accel/accel.sh@20 -- # IFS=: 00:06:58.178 17:44:02 -- accel/accel.sh@20 -- # read -r var val 00:06:58.178 17:44:02 -- accel/accel.sh@21 -- # val= 00:06:58.178 17:44:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.178 17:44:02 -- accel/accel.sh@20 -- # IFS=: 00:06:58.178 17:44:02 -- accel/accel.sh@20 -- # read -r var val 00:06:58.178 17:44:02 -- accel/accel.sh@21 -- # val= 00:06:58.178 17:44:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.178 17:44:02 -- accel/accel.sh@20 -- # IFS=: 00:06:58.178 17:44:02 -- accel/accel.sh@20 -- # read -r var val 00:06:58.178 17:44:02 -- accel/accel.sh@21 -- # val= 00:06:58.178 17:44:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.178 17:44:02 -- accel/accel.sh@20 -- # IFS=: 00:06:58.178 17:44:02 -- accel/accel.sh@20 -- # read -r var val 00:06:58.178 17:44:02 -- accel/accel.sh@21 -- # val= 00:06:58.178 17:44:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.178 17:44:02 -- accel/accel.sh@20 -- # IFS=: 00:06:58.178 17:44:02 -- accel/accel.sh@20 -- # read -r var val 00:06:58.178 17:44:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:58.178 17:44:02 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:58.178 17:44:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.178 00:06:58.178 real 0m2.604s 00:06:58.178 user 0m2.366s 00:06:58.178 sys 0m0.245s 00:06:58.178 17:44:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.178 17:44:02 -- common/autotest_common.sh@10 -- # set +x 00:06:58.178 ************************************ 00:06:58.178 END TEST accel_copy_crc32c 00:06:58.178 ************************************ 00:06:58.178 
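Each test in this section is bracketed by the same START TEST / END TEST banners and closed by a real/user/sys triple, which is bash's `time` output for the traced run. A minimal sketch of a wrapper that would produce that framing is shown below; it is an assumed shape only, since the real run_test helper lives in test/common/autotest_common.sh and is not reproduced in this log.

# Minimal sketch of a run_test-style wrapper (assumed shape, for illustration only;
# not the actual helper from test/common/autotest_common.sh).
run_test_sketch() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"    # yields the real/user/sys lines seen before each END TEST banner
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
# e.g. run_test_sketch accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2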
17:44:02 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:58.178 17:44:02 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:58.178 17:44:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.178 17:44:02 -- common/autotest_common.sh@10 -- # set +x 00:06:58.178 ************************************ 00:06:58.178 START TEST accel_copy_crc32c_C2 00:06:58.178 ************************************ 00:06:58.178 17:44:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:58.178 17:44:02 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.178 17:44:02 -- accel/accel.sh@17 -- # local accel_module 00:06:58.178 17:44:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:58.178 17:44:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:58.178 17:44:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.178 17:44:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.178 17:44:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.178 17:44:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.178 17:44:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.178 17:44:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.178 17:44:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.178 17:44:02 -- accel/accel.sh@42 -- # jq -r . 00:06:58.178 [2024-07-22 17:44:02.173311] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:58.178 [2024-07-22 17:44:02.173427] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1496586 ] 00:06:58.178 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.178 [2024-07-22 17:44:02.257168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.178 [2024-07-22 17:44:02.324651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.561 17:44:03 -- accel/accel.sh@18 -- # out=' 00:06:59.561 SPDK Configuration: 00:06:59.561 Core mask: 0x1 00:06:59.561 00:06:59.561 Accel Perf Configuration: 00:06:59.561 Workload Type: copy_crc32c 00:06:59.561 CRC-32C seed: 0 00:06:59.561 Vector size: 4096 bytes 00:06:59.561 Transfer size: 8192 bytes 00:06:59.561 Vector count 2 00:06:59.561 Module: software 00:06:59.561 Queue depth: 32 00:06:59.561 Allocate depth: 32 00:06:59.561 # threads/core: 1 00:06:59.561 Run time: 1 seconds 00:06:59.561 Verify: Yes 00:06:59.561 00:06:59.561 Running for 1 seconds... 
00:06:59.561 00:06:59.561 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:59.561 ------------------------------------------------------------------------------------ 00:06:59.561 0,0 202976/s 1585 MiB/s 0 0 00:06:59.561 ==================================================================================== 00:06:59.561 Total 202976/s 792 MiB/s 0 0' 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:59.561 17:44:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:59.561 17:44:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:59.561 17:44:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.561 17:44:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.561 17:44:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.561 17:44:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.561 17:44:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.561 17:44:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.561 17:44:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.561 17:44:03 -- accel/accel.sh@42 -- # jq -r . 00:06:59.561 [2024-07-22 17:44:03.473928] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:59.561 [2024-07-22 17:44:03.474039] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1496672 ] 00:06:59.561 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.561 [2024-07-22 17:44:03.555849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.561 [2024-07-22 17:44:03.619149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.561 17:44:03 -- accel/accel.sh@21 -- # val= 00:06:59.561 17:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:59.561 17:44:03 -- accel/accel.sh@21 -- # val= 00:06:59.561 17:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:59.561 17:44:03 -- accel/accel.sh@21 -- # val=0x1 00:06:59.561 17:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:59.561 17:44:03 -- accel/accel.sh@21 -- # val= 00:06:59.561 17:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:59.561 17:44:03 -- accel/accel.sh@21 -- # val= 00:06:59.561 17:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:59.561 17:44:03 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:59.561 17:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.561 17:44:03 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:59.561 17:44:03 -- accel/accel.sh@21 -- # val=0 00:06:59.561 17:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # IFS=: 
00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:59.561 17:44:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:59.561 17:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:59.561 17:44:03 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:59.561 17:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:59.561 17:44:03 -- accel/accel.sh@21 -- # val= 00:06:59.561 17:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:59.561 17:44:03 -- accel/accel.sh@21 -- # val=software 00:06:59.561 17:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.561 17:44:03 -- accel/accel.sh@23 -- # accel_module=software 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:59.561 17:44:03 -- accel/accel.sh@21 -- # val=32 00:06:59.561 17:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:59.561 17:44:03 -- accel/accel.sh@21 -- # val=32 00:06:59.561 17:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:59.561 17:44:03 -- accel/accel.sh@21 -- # val=1 00:06:59.561 17:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:59.561 17:44:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:59.561 17:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:59.561 17:44:03 -- accel/accel.sh@21 -- # val=Yes 00:06:59.561 17:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:59.561 17:44:03 -- accel/accel.sh@21 -- # val= 00:06:59.561 17:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # read -r var val 00:06:59.561 17:44:03 -- accel/accel.sh@21 -- # val= 00:06:59.561 17:44:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # IFS=: 00:06:59.561 17:44:03 -- accel/accel.sh@20 -- # read -r var val 00:07:00.592 17:44:04 -- accel/accel.sh@21 -- # val= 00:07:00.592 17:44:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.592 17:44:04 -- accel/accel.sh@20 -- # IFS=: 00:07:00.592 17:44:04 -- accel/accel.sh@20 -- # read -r var val 00:07:00.592 17:44:04 -- accel/accel.sh@21 -- # val= 00:07:00.592 17:44:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.592 17:44:04 -- accel/accel.sh@20 -- # IFS=: 00:07:00.592 17:44:04 -- accel/accel.sh@20 -- # read -r var val 00:07:00.592 17:44:04 -- accel/accel.sh@21 -- # val= 00:07:00.592 17:44:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.592 17:44:04 -- accel/accel.sh@20 -- # IFS=: 00:07:00.592 17:44:04 -- accel/accel.sh@20 -- # read -r var val 00:07:00.592 17:44:04 -- accel/accel.sh@21 -- # val= 00:07:00.592 17:44:04 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:00.592 17:44:04 -- accel/accel.sh@20 -- # IFS=: 00:07:00.592 17:44:04 -- accel/accel.sh@20 -- # read -r var val 00:07:00.592 17:44:04 -- accel/accel.sh@21 -- # val= 00:07:00.592 17:44:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.592 17:44:04 -- accel/accel.sh@20 -- # IFS=: 00:07:00.592 17:44:04 -- accel/accel.sh@20 -- # read -r var val 00:07:00.592 17:44:04 -- accel/accel.sh@21 -- # val= 00:07:00.592 17:44:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.592 17:44:04 -- accel/accel.sh@20 -- # IFS=: 00:07:00.592 17:44:04 -- accel/accel.sh@20 -- # read -r var val 00:07:00.592 17:44:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:00.592 17:44:04 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:00.592 17:44:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.592 00:07:00.592 real 0m2.598s 00:07:00.592 user 0m2.369s 00:07:00.592 sys 0m0.237s 00:07:00.592 17:44:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.592 17:44:04 -- common/autotest_common.sh@10 -- # set +x 00:07:00.592 ************************************ 00:07:00.592 END TEST accel_copy_crc32c_C2 00:07:00.592 ************************************ 00:07:00.592 17:44:04 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:00.592 17:44:04 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:00.592 17:44:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:00.592 17:44:04 -- common/autotest_common.sh@10 -- # set +x 00:07:00.592 ************************************ 00:07:00.592 START TEST accel_dualcast 00:07:00.592 ************************************ 00:07:00.592 17:44:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:07:00.592 17:44:04 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.592 17:44:04 -- accel/accel.sh@17 -- # local accel_module 00:07:00.592 17:44:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:00.592 17:44:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:00.592 17:44:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.592 17:44:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.592 17:44:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.592 17:44:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.592 17:44:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.592 17:44:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.592 17:44:04 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.592 17:44:04 -- accel/accel.sh@42 -- # jq -r . 00:07:00.592 [2024-07-22 17:44:04.813042] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:00.592 [2024-07-22 17:44:04.813117] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1496926 ] 00:07:00.592 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.852 [2024-07-22 17:44:04.897420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.852 [2024-07-22 17:44:04.961559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.234 17:44:06 -- accel/accel.sh@18 -- # out=' 00:07:02.234 SPDK Configuration: 00:07:02.234 Core mask: 0x1 00:07:02.234 00:07:02.234 Accel Perf Configuration: 00:07:02.234 Workload Type: dualcast 00:07:02.234 Transfer size: 4096 bytes 00:07:02.234 Vector count 1 00:07:02.234 Module: software 00:07:02.234 Queue depth: 32 00:07:02.234 Allocate depth: 32 00:07:02.234 # threads/core: 1 00:07:02.234 Run time: 1 seconds 00:07:02.234 Verify: Yes 00:07:02.234 00:07:02.234 Running for 1 seconds... 00:07:02.234 00:07:02.234 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:02.234 ------------------------------------------------------------------------------------ 00:07:02.234 0,0 394592/s 1541 MiB/s 0 0 00:07:02.234 ==================================================================================== 00:07:02.234 Total 394592/s 1541 MiB/s 0 0' 00:07:02.234 17:44:06 -- accel/accel.sh@20 -- # IFS=: 00:07:02.234 17:44:06 -- accel/accel.sh@20 -- # read -r var val 00:07:02.234 17:44:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:02.234 17:44:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:02.234 17:44:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.234 17:44:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.234 17:44:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.234 17:44:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.234 17:44:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.234 17:44:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.234 17:44:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.234 17:44:06 -- accel/accel.sh@42 -- # jq -r . 00:07:02.234 [2024-07-22 17:44:06.110001] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:02.234 [2024-07-22 17:44:06.110103] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1497233 ] 00:07:02.234 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.234 [2024-07-22 17:44:06.192682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.234 [2024-07-22 17:44:06.257632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.234 17:44:06 -- accel/accel.sh@21 -- # val= 00:07:02.234 17:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.234 17:44:06 -- accel/accel.sh@20 -- # IFS=: 00:07:02.234 17:44:06 -- accel/accel.sh@20 -- # read -r var val 00:07:02.234 17:44:06 -- accel/accel.sh@21 -- # val= 00:07:02.234 17:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.234 17:44:06 -- accel/accel.sh@20 -- # IFS=: 00:07:02.234 17:44:06 -- accel/accel.sh@20 -- # read -r var val 00:07:02.234 17:44:06 -- accel/accel.sh@21 -- # val=0x1 00:07:02.234 17:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.234 17:44:06 -- accel/accel.sh@20 -- # IFS=: 00:07:02.234 17:44:06 -- accel/accel.sh@20 -- # read -r var val 00:07:02.234 17:44:06 -- accel/accel.sh@21 -- # val= 00:07:02.234 17:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.234 17:44:06 -- accel/accel.sh@20 -- # IFS=: 00:07:02.234 17:44:06 -- accel/accel.sh@20 -- # read -r var val 00:07:02.234 17:44:06 -- accel/accel.sh@21 -- # val= 00:07:02.234 17:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.234 17:44:06 -- accel/accel.sh@20 -- # IFS=: 00:07:02.234 17:44:06 -- accel/accel.sh@20 -- # read -r var val 00:07:02.234 17:44:06 -- accel/accel.sh@21 -- # val=dualcast 00:07:02.234 17:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.234 17:44:06 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:02.234 17:44:06 -- accel/accel.sh@20 -- # IFS=: 00:07:02.234 17:44:06 -- accel/accel.sh@20 -- # read -r var val 00:07:02.234 17:44:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:02.234 17:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.234 17:44:06 -- accel/accel.sh@20 -- # IFS=: 00:07:02.235 17:44:06 -- accel/accel.sh@20 -- # read -r var val 00:07:02.235 17:44:06 -- accel/accel.sh@21 -- # val= 00:07:02.235 17:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.235 17:44:06 -- accel/accel.sh@20 -- # IFS=: 00:07:02.235 17:44:06 -- accel/accel.sh@20 -- # read -r var val 00:07:02.235 17:44:06 -- accel/accel.sh@21 -- # val=software 00:07:02.235 17:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.235 17:44:06 -- accel/accel.sh@23 -- # accel_module=software 00:07:02.235 17:44:06 -- accel/accel.sh@20 -- # IFS=: 00:07:02.235 17:44:06 -- accel/accel.sh@20 -- # read -r var val 00:07:02.235 17:44:06 -- accel/accel.sh@21 -- # val=32 00:07:02.235 17:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.235 17:44:06 -- accel/accel.sh@20 -- # IFS=: 00:07:02.235 17:44:06 -- accel/accel.sh@20 -- # read -r var val 00:07:02.235 17:44:06 -- accel/accel.sh@21 -- # val=32 00:07:02.235 17:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.235 17:44:06 -- accel/accel.sh@20 -- # IFS=: 00:07:02.235 17:44:06 -- accel/accel.sh@20 -- # read -r var val 00:07:02.235 17:44:06 -- accel/accel.sh@21 -- # val=1 00:07:02.235 17:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.235 17:44:06 -- accel/accel.sh@20 -- # IFS=: 00:07:02.235 17:44:06 -- accel/accel.sh@20 -- # read -r var val 00:07:02.235 17:44:06 
-- accel/accel.sh@21 -- # val='1 seconds' 00:07:02.235 17:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.235 17:44:06 -- accel/accel.sh@20 -- # IFS=: 00:07:02.235 17:44:06 -- accel/accel.sh@20 -- # read -r var val 00:07:02.235 17:44:06 -- accel/accel.sh@21 -- # val=Yes 00:07:02.235 17:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.235 17:44:06 -- accel/accel.sh@20 -- # IFS=: 00:07:02.235 17:44:06 -- accel/accel.sh@20 -- # read -r var val 00:07:02.235 17:44:06 -- accel/accel.sh@21 -- # val= 00:07:02.235 17:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.235 17:44:06 -- accel/accel.sh@20 -- # IFS=: 00:07:02.235 17:44:06 -- accel/accel.sh@20 -- # read -r var val 00:07:02.235 17:44:06 -- accel/accel.sh@21 -- # val= 00:07:02.235 17:44:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.235 17:44:06 -- accel/accel.sh@20 -- # IFS=: 00:07:02.235 17:44:06 -- accel/accel.sh@20 -- # read -r var val 00:07:03.181 17:44:07 -- accel/accel.sh@21 -- # val= 00:07:03.181 17:44:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.181 17:44:07 -- accel/accel.sh@20 -- # IFS=: 00:07:03.181 17:44:07 -- accel/accel.sh@20 -- # read -r var val 00:07:03.181 17:44:07 -- accel/accel.sh@21 -- # val= 00:07:03.181 17:44:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.181 17:44:07 -- accel/accel.sh@20 -- # IFS=: 00:07:03.181 17:44:07 -- accel/accel.sh@20 -- # read -r var val 00:07:03.181 17:44:07 -- accel/accel.sh@21 -- # val= 00:07:03.181 17:44:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.181 17:44:07 -- accel/accel.sh@20 -- # IFS=: 00:07:03.181 17:44:07 -- accel/accel.sh@20 -- # read -r var val 00:07:03.181 17:44:07 -- accel/accel.sh@21 -- # val= 00:07:03.181 17:44:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.181 17:44:07 -- accel/accel.sh@20 -- # IFS=: 00:07:03.181 17:44:07 -- accel/accel.sh@20 -- # read -r var val 00:07:03.181 17:44:07 -- accel/accel.sh@21 -- # val= 00:07:03.181 17:44:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.181 17:44:07 -- accel/accel.sh@20 -- # IFS=: 00:07:03.181 17:44:07 -- accel/accel.sh@20 -- # read -r var val 00:07:03.181 17:44:07 -- accel/accel.sh@21 -- # val= 00:07:03.181 17:44:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.181 17:44:07 -- accel/accel.sh@20 -- # IFS=: 00:07:03.181 17:44:07 -- accel/accel.sh@20 -- # read -r var val 00:07:03.181 17:44:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.181 17:44:07 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:03.181 17:44:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.181 00:07:03.181 real 0m2.596s 00:07:03.181 user 0m2.363s 00:07:03.181 sys 0m0.238s 00:07:03.181 17:44:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.181 17:44:07 -- common/autotest_common.sh@10 -- # set +x 00:07:03.181 ************************************ 00:07:03.181 END TEST accel_dualcast 00:07:03.181 ************************************ 00:07:03.181 17:44:07 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:03.181 17:44:07 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:03.181 17:44:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:03.181 17:44:07 -- common/autotest_common.sh@10 -- # set +x 00:07:03.181 ************************************ 00:07:03.181 START TEST accel_compare 00:07:03.181 ************************************ 00:07:03.181 17:44:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:07:03.181 17:44:07 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.181 17:44:07 
-- accel/accel.sh@17 -- # local accel_module 00:07:03.181 17:44:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:03.181 17:44:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:03.181 17:44:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.181 17:44:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.181 17:44:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.181 17:44:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.181 17:44:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.181 17:44:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.181 17:44:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.181 17:44:07 -- accel/accel.sh@42 -- # jq -r . 00:07:03.181 [2024-07-22 17:44:07.454072] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:03.181 [2024-07-22 17:44:07.454149] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1497521 ] 00:07:03.442 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.442 [2024-07-22 17:44:07.540444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.442 [2024-07-22 17:44:07.608465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.826 17:44:08 -- accel/accel.sh@18 -- # out=' 00:07:04.826 SPDK Configuration: 00:07:04.826 Core mask: 0x1 00:07:04.826 00:07:04.826 Accel Perf Configuration: 00:07:04.826 Workload Type: compare 00:07:04.826 Transfer size: 4096 bytes 00:07:04.826 Vector count 1 00:07:04.826 Module: software 00:07:04.826 Queue depth: 32 00:07:04.826 Allocate depth: 32 00:07:04.826 # threads/core: 1 00:07:04.826 Run time: 1 seconds 00:07:04.826 Verify: Yes 00:07:04.826 00:07:04.826 Running for 1 seconds... 00:07:04.826 00:07:04.826 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:04.826 ------------------------------------------------------------------------------------ 00:07:04.826 0,0 473280/s 1848 MiB/s 0 0 00:07:04.826 ==================================================================================== 00:07:04.826 Total 473280/s 1848 MiB/s 0 0' 00:07:04.826 17:44:08 -- accel/accel.sh@20 -- # IFS=: 00:07:04.826 17:44:08 -- accel/accel.sh@20 -- # read -r var val 00:07:04.826 17:44:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:04.826 17:44:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:04.826 17:44:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.826 17:44:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.826 17:44:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.826 17:44:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.826 17:44:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.826 17:44:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.826 17:44:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.826 17:44:08 -- accel/accel.sh@42 -- # jq -r . 00:07:04.826 [2024-07-22 17:44:08.755263] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
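The "EAL: No free 2048 kB hugepages reported on node 1" notice recurs before every accel_perf run in this section. On a Linux host like this test node, the per-NUMA-node 2 MiB hugepage pools can be inspected directly with the standard sysfs and procfs paths below; this is a general diagnostic, not something accel.sh itself runs.

# General Linux checks for 2048 kB hugepage availability per NUMA node
# (diagnostic only; not part of the accel.sh harness).
grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
grep -i huge /proc/meminfo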
00:07:04.826 [2024-07-22 17:44:08.755335] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1497595 ] 00:07:04.826 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.826 [2024-07-22 17:44:08.840367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.826 [2024-07-22 17:44:08.902800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.826 17:44:08 -- accel/accel.sh@21 -- # val= 00:07:04.826 17:44:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.826 17:44:08 -- accel/accel.sh@20 -- # IFS=: 00:07:04.826 17:44:08 -- accel/accel.sh@20 -- # read -r var val 00:07:04.826 17:44:08 -- accel/accel.sh@21 -- # val= 00:07:04.826 17:44:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.826 17:44:08 -- accel/accel.sh@20 -- # IFS=: 00:07:04.826 17:44:08 -- accel/accel.sh@20 -- # read -r var val 00:07:04.826 17:44:08 -- accel/accel.sh@21 -- # val=0x1 00:07:04.826 17:44:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.826 17:44:08 -- accel/accel.sh@20 -- # IFS=: 00:07:04.826 17:44:08 -- accel/accel.sh@20 -- # read -r var val 00:07:04.826 17:44:08 -- accel/accel.sh@21 -- # val= 00:07:04.826 17:44:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.826 17:44:08 -- accel/accel.sh@20 -- # IFS=: 00:07:04.826 17:44:08 -- accel/accel.sh@20 -- # read -r var val 00:07:04.826 17:44:08 -- accel/accel.sh@21 -- # val= 00:07:04.826 17:44:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.826 17:44:08 -- accel/accel.sh@20 -- # IFS=: 00:07:04.826 17:44:08 -- accel/accel.sh@20 -- # read -r var val 00:07:04.826 17:44:08 -- accel/accel.sh@21 -- # val=compare 00:07:04.826 17:44:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.826 17:44:08 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:04.826 17:44:08 -- accel/accel.sh@20 -- # IFS=: 00:07:04.826 17:44:08 -- accel/accel.sh@20 -- # read -r var val 00:07:04.826 17:44:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:04.826 17:44:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.826 17:44:08 -- accel/accel.sh@20 -- # IFS=: 00:07:04.826 17:44:08 -- accel/accel.sh@20 -- # read -r var val 00:07:04.826 17:44:08 -- accel/accel.sh@21 -- # val= 00:07:04.826 17:44:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.826 17:44:08 -- accel/accel.sh@20 -- # IFS=: 00:07:04.826 17:44:08 -- accel/accel.sh@20 -- # read -r var val 00:07:04.826 17:44:08 -- accel/accel.sh@21 -- # val=software 00:07:04.826 17:44:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.826 17:44:08 -- accel/accel.sh@23 -- # accel_module=software 00:07:04.826 17:44:08 -- accel/accel.sh@20 -- # IFS=: 00:07:04.826 17:44:08 -- accel/accel.sh@20 -- # read -r var val 00:07:04.826 17:44:08 -- accel/accel.sh@21 -- # val=32 00:07:04.826 17:44:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.826 17:44:08 -- accel/accel.sh@20 -- # IFS=: 00:07:04.826 17:44:08 -- accel/accel.sh@20 -- # read -r var val 00:07:04.826 17:44:08 -- accel/accel.sh@21 -- # val=32 00:07:04.826 17:44:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.826 17:44:08 -- accel/accel.sh@20 -- # IFS=: 00:07:04.826 17:44:08 -- accel/accel.sh@20 -- # read -r var val 00:07:04.826 17:44:08 -- accel/accel.sh@21 -- # val=1 00:07:04.826 17:44:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.826 17:44:08 -- accel/accel.sh@20 -- # IFS=: 00:07:04.826 17:44:08 -- accel/accel.sh@20 -- # read -r var val 00:07:04.826 17:44:08 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:04.826 17:44:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.826 17:44:08 -- accel/accel.sh@20 -- # IFS=: 00:07:04.826 17:44:08 -- accel/accel.sh@20 -- # read -r var val 00:07:04.826 17:44:08 -- accel/accel.sh@21 -- # val=Yes 00:07:04.826 17:44:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.827 17:44:08 -- accel/accel.sh@20 -- # IFS=: 00:07:04.827 17:44:08 -- accel/accel.sh@20 -- # read -r var val 00:07:04.827 17:44:08 -- accel/accel.sh@21 -- # val= 00:07:04.827 17:44:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.827 17:44:08 -- accel/accel.sh@20 -- # IFS=: 00:07:04.827 17:44:08 -- accel/accel.sh@20 -- # read -r var val 00:07:04.827 17:44:08 -- accel/accel.sh@21 -- # val= 00:07:04.827 17:44:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.827 17:44:08 -- accel/accel.sh@20 -- # IFS=: 00:07:04.827 17:44:08 -- accel/accel.sh@20 -- # read -r var val 00:07:05.769 17:44:10 -- accel/accel.sh@21 -- # val= 00:07:05.769 17:44:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.769 17:44:10 -- accel/accel.sh@20 -- # IFS=: 00:07:05.769 17:44:10 -- accel/accel.sh@20 -- # read -r var val 00:07:05.769 17:44:10 -- accel/accel.sh@21 -- # val= 00:07:05.769 17:44:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.769 17:44:10 -- accel/accel.sh@20 -- # IFS=: 00:07:05.769 17:44:10 -- accel/accel.sh@20 -- # read -r var val 00:07:05.769 17:44:10 -- accel/accel.sh@21 -- # val= 00:07:05.769 17:44:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.769 17:44:10 -- accel/accel.sh@20 -- # IFS=: 00:07:05.769 17:44:10 -- accel/accel.sh@20 -- # read -r var val 00:07:05.769 17:44:10 -- accel/accel.sh@21 -- # val= 00:07:05.769 17:44:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.769 17:44:10 -- accel/accel.sh@20 -- # IFS=: 00:07:05.769 17:44:10 -- accel/accel.sh@20 -- # read -r var val 00:07:05.769 17:44:10 -- accel/accel.sh@21 -- # val= 00:07:05.769 17:44:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.769 17:44:10 -- accel/accel.sh@20 -- # IFS=: 00:07:05.769 17:44:10 -- accel/accel.sh@20 -- # read -r var val 00:07:05.769 17:44:10 -- accel/accel.sh@21 -- # val= 00:07:05.769 17:44:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.769 17:44:10 -- accel/accel.sh@20 -- # IFS=: 00:07:05.769 17:44:10 -- accel/accel.sh@20 -- # read -r var val 00:07:05.769 17:44:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:05.769 17:44:10 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:05.769 17:44:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.769 00:07:05.769 real 0m2.602s 00:07:05.770 user 0m2.378s 00:07:05.770 sys 0m0.231s 00:07:05.770 17:44:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.770 17:44:10 -- common/autotest_common.sh@10 -- # set +x 00:07:05.770 ************************************ 00:07:05.770 END TEST accel_compare 00:07:05.770 ************************************ 00:07:06.030 17:44:10 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:06.030 17:44:10 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:06.030 17:44:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:06.030 17:44:10 -- common/autotest_common.sh@10 -- # set +x 00:07:06.030 ************************************ 00:07:06.030 START TEST accel_xor 00:07:06.030 ************************************ 00:07:06.030 17:44:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:07:06.030 17:44:10 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.030 17:44:10 -- accel/accel.sh@17 
-- # local accel_module 00:07:06.030 17:44:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:06.030 17:44:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:06.030 17:44:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.030 17:44:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.030 17:44:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.030 17:44:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.030 17:44:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.030 17:44:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.030 17:44:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.031 17:44:10 -- accel/accel.sh@42 -- # jq -r . 00:07:06.031 [2024-07-22 17:44:10.095600] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:06.031 [2024-07-22 17:44:10.095672] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1497893 ] 00:07:06.031 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.031 [2024-07-22 17:44:10.179876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.031 [2024-07-22 17:44:10.240386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.415 17:44:11 -- accel/accel.sh@18 -- # out=' 00:07:07.415 SPDK Configuration: 00:07:07.415 Core mask: 0x1 00:07:07.415 00:07:07.415 Accel Perf Configuration: 00:07:07.415 Workload Type: xor 00:07:07.415 Source buffers: 2 00:07:07.415 Transfer size: 4096 bytes 00:07:07.415 Vector count 1 00:07:07.415 Module: software 00:07:07.415 Queue depth: 32 00:07:07.415 Allocate depth: 32 00:07:07.415 # threads/core: 1 00:07:07.415 Run time: 1 seconds 00:07:07.415 Verify: Yes 00:07:07.415 00:07:07.415 Running for 1 seconds... 00:07:07.415 00:07:07.415 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:07.415 ------------------------------------------------------------------------------------ 00:07:07.415 0,0 387104/s 1512 MiB/s 0 0 00:07:07.415 ==================================================================================== 00:07:07.415 Total 387104/s 1512 MiB/s 0 0' 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # IFS=: 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # read -r var val 00:07:07.415 17:44:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:07.415 17:44:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:07.415 17:44:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.415 17:44:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.415 17:44:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.415 17:44:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.415 17:44:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.415 17:44:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.415 17:44:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.415 17:44:11 -- accel/accel.sh@42 -- # jq -r . 00:07:07.415 [2024-07-22 17:44:11.388222] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:07.415 [2024-07-22 17:44:11.388325] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1498195 ] 00:07:07.415 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.415 [2024-07-22 17:44:11.471936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.415 [2024-07-22 17:44:11.535145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.415 17:44:11 -- accel/accel.sh@21 -- # val= 00:07:07.415 17:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # IFS=: 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # read -r var val 00:07:07.415 17:44:11 -- accel/accel.sh@21 -- # val= 00:07:07.415 17:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # IFS=: 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # read -r var val 00:07:07.415 17:44:11 -- accel/accel.sh@21 -- # val=0x1 00:07:07.415 17:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # IFS=: 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # read -r var val 00:07:07.415 17:44:11 -- accel/accel.sh@21 -- # val= 00:07:07.415 17:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # IFS=: 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # read -r var val 00:07:07.415 17:44:11 -- accel/accel.sh@21 -- # val= 00:07:07.415 17:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # IFS=: 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # read -r var val 00:07:07.415 17:44:11 -- accel/accel.sh@21 -- # val=xor 00:07:07.415 17:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.415 17:44:11 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # IFS=: 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # read -r var val 00:07:07.415 17:44:11 -- accel/accel.sh@21 -- # val=2 00:07:07.415 17:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # IFS=: 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # read -r var val 00:07:07.415 17:44:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:07.415 17:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # IFS=: 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # read -r var val 00:07:07.415 17:44:11 -- accel/accel.sh@21 -- # val= 00:07:07.415 17:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # IFS=: 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # read -r var val 00:07:07.415 17:44:11 -- accel/accel.sh@21 -- # val=software 00:07:07.415 17:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.415 17:44:11 -- accel/accel.sh@23 -- # accel_module=software 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # IFS=: 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # read -r var val 00:07:07.415 17:44:11 -- accel/accel.sh@21 -- # val=32 00:07:07.415 17:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # IFS=: 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # read -r var val 00:07:07.415 17:44:11 -- accel/accel.sh@21 -- # val=32 00:07:07.415 17:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # IFS=: 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # read -r var val 00:07:07.415 17:44:11 -- 
accel/accel.sh@21 -- # val=1 00:07:07.415 17:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # IFS=: 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # read -r var val 00:07:07.415 17:44:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:07.415 17:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # IFS=: 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # read -r var val 00:07:07.415 17:44:11 -- accel/accel.sh@21 -- # val=Yes 00:07:07.415 17:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # IFS=: 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # read -r var val 00:07:07.415 17:44:11 -- accel/accel.sh@21 -- # val= 00:07:07.415 17:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # IFS=: 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # read -r var val 00:07:07.415 17:44:11 -- accel/accel.sh@21 -- # val= 00:07:07.415 17:44:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # IFS=: 00:07:07.415 17:44:11 -- accel/accel.sh@20 -- # read -r var val 00:07:08.800 17:44:12 -- accel/accel.sh@21 -- # val= 00:07:08.800 17:44:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.800 17:44:12 -- accel/accel.sh@20 -- # IFS=: 00:07:08.800 17:44:12 -- accel/accel.sh@20 -- # read -r var val 00:07:08.800 17:44:12 -- accel/accel.sh@21 -- # val= 00:07:08.800 17:44:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.800 17:44:12 -- accel/accel.sh@20 -- # IFS=: 00:07:08.800 17:44:12 -- accel/accel.sh@20 -- # read -r var val 00:07:08.800 17:44:12 -- accel/accel.sh@21 -- # val= 00:07:08.800 17:44:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.800 17:44:12 -- accel/accel.sh@20 -- # IFS=: 00:07:08.800 17:44:12 -- accel/accel.sh@20 -- # read -r var val 00:07:08.800 17:44:12 -- accel/accel.sh@21 -- # val= 00:07:08.800 17:44:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.800 17:44:12 -- accel/accel.sh@20 -- # IFS=: 00:07:08.800 17:44:12 -- accel/accel.sh@20 -- # read -r var val 00:07:08.800 17:44:12 -- accel/accel.sh@21 -- # val= 00:07:08.800 17:44:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.800 17:44:12 -- accel/accel.sh@20 -- # IFS=: 00:07:08.800 17:44:12 -- accel/accel.sh@20 -- # read -r var val 00:07:08.800 17:44:12 -- accel/accel.sh@21 -- # val= 00:07:08.800 17:44:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.800 17:44:12 -- accel/accel.sh@20 -- # IFS=: 00:07:08.800 17:44:12 -- accel/accel.sh@20 -- # read -r var val 00:07:08.800 17:44:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:08.800 17:44:12 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:08.800 17:44:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.800 00:07:08.800 real 0m2.590s 00:07:08.800 user 0m2.369s 00:07:08.800 sys 0m0.228s 00:07:08.800 17:44:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.800 17:44:12 -- common/autotest_common.sh@10 -- # set +x 00:07:08.800 ************************************ 00:07:08.800 END TEST accel_xor 00:07:08.800 ************************************ 00:07:08.800 17:44:12 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:08.800 17:44:12 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:08.800 17:44:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:08.800 17:44:12 -- common/autotest_common.sh@10 -- # set +x 00:07:08.800 ************************************ 00:07:08.800 START TEST accel_xor 
00:07:08.800 ************************************ 00:07:08.800 17:44:12 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:07:08.800 17:44:12 -- accel/accel.sh@16 -- # local accel_opc 00:07:08.800 17:44:12 -- accel/accel.sh@17 -- # local accel_module 00:07:08.800 17:44:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:08.800 17:44:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:08.800 17:44:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.800 17:44:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.800 17:44:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.800 17:44:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.800 17:44:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.800 17:44:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.800 17:44:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.800 17:44:12 -- accel/accel.sh@42 -- # jq -r . 00:07:08.800 [2024-07-22 17:44:12.731682] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:08.800 [2024-07-22 17:44:12.731774] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1498310 ] 00:07:08.800 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.800 [2024-07-22 17:44:12.816179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.800 [2024-07-22 17:44:12.888777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.740 17:44:14 -- accel/accel.sh@18 -- # out=' 00:07:09.740 SPDK Configuration: 00:07:09.740 Core mask: 0x1 00:07:09.740 00:07:09.740 Accel Perf Configuration: 00:07:09.740 Workload Type: xor 00:07:09.740 Source buffers: 3 00:07:09.740 Transfer size: 4096 bytes 00:07:09.740 Vector count 1 00:07:09.740 Module: software 00:07:09.740 Queue depth: 32 00:07:09.740 Allocate depth: 32 00:07:09.740 # threads/core: 1 00:07:09.740 Run time: 1 seconds 00:07:09.740 Verify: Yes 00:07:09.740 00:07:09.740 Running for 1 seconds... 00:07:09.740 00:07:09.740 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:09.740 ------------------------------------------------------------------------------------ 00:07:09.740 0,0 362752/s 1417 MiB/s 0 0 00:07:09.740 ==================================================================================== 00:07:09.740 Total 362752/s 1417 MiB/s 0 0' 00:07:09.740 17:44:14 -- accel/accel.sh@20 -- # IFS=: 00:07:09.740 17:44:14 -- accel/accel.sh@20 -- # read -r var val 00:07:09.740 17:44:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:09.740 17:44:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:09.740 17:44:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.740 17:44:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.740 17:44:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.740 17:44:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.740 17:44:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.740 17:44:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.740 17:44:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.740 17:44:14 -- accel/accel.sh@42 -- # jq -r . 00:07:10.001 [2024-07-22 17:44:14.035484] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:10.001 [2024-07-22 17:44:14.035557] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1498540 ] 00:07:10.001 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.001 [2024-07-22 17:44:14.119974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.001 [2024-07-22 17:44:14.181743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.001 17:44:14 -- accel/accel.sh@21 -- # val= 00:07:10.001 17:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.001 17:44:14 -- accel/accel.sh@20 -- # IFS=: 00:07:10.001 17:44:14 -- accel/accel.sh@20 -- # read -r var val 00:07:10.001 17:44:14 -- accel/accel.sh@21 -- # val= 00:07:10.001 17:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.001 17:44:14 -- accel/accel.sh@20 -- # IFS=: 00:07:10.001 17:44:14 -- accel/accel.sh@20 -- # read -r var val 00:07:10.001 17:44:14 -- accel/accel.sh@21 -- # val=0x1 00:07:10.001 17:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.001 17:44:14 -- accel/accel.sh@20 -- # IFS=: 00:07:10.001 17:44:14 -- accel/accel.sh@20 -- # read -r var val 00:07:10.001 17:44:14 -- accel/accel.sh@21 -- # val= 00:07:10.001 17:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.001 17:44:14 -- accel/accel.sh@20 -- # IFS=: 00:07:10.001 17:44:14 -- accel/accel.sh@20 -- # read -r var val 00:07:10.001 17:44:14 -- accel/accel.sh@21 -- # val= 00:07:10.001 17:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.001 17:44:14 -- accel/accel.sh@20 -- # IFS=: 00:07:10.001 17:44:14 -- accel/accel.sh@20 -- # read -r var val 00:07:10.001 17:44:14 -- accel/accel.sh@21 -- # val=xor 00:07:10.001 17:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.001 17:44:14 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:10.001 17:44:14 -- accel/accel.sh@20 -- # IFS=: 00:07:10.001 17:44:14 -- accel/accel.sh@20 -- # read -r var val 00:07:10.001 17:44:14 -- accel/accel.sh@21 -- # val=3 00:07:10.001 17:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.001 17:44:14 -- accel/accel.sh@20 -- # IFS=: 00:07:10.001 17:44:14 -- accel/accel.sh@20 -- # read -r var val 00:07:10.001 17:44:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.001 17:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.001 17:44:14 -- accel/accel.sh@20 -- # IFS=: 00:07:10.001 17:44:14 -- accel/accel.sh@20 -- # read -r var val 00:07:10.001 17:44:14 -- accel/accel.sh@21 -- # val= 00:07:10.001 17:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.001 17:44:14 -- accel/accel.sh@20 -- # IFS=: 00:07:10.001 17:44:14 -- accel/accel.sh@20 -- # read -r var val 00:07:10.001 17:44:14 -- accel/accel.sh@21 -- # val=software 00:07:10.001 17:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.001 17:44:14 -- accel/accel.sh@23 -- # accel_module=software 00:07:10.001 17:44:14 -- accel/accel.sh@20 -- # IFS=: 00:07:10.001 17:44:14 -- accel/accel.sh@20 -- # read -r var val 00:07:10.001 17:44:14 -- accel/accel.sh@21 -- # val=32 00:07:10.001 17:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.001 17:44:14 -- accel/accel.sh@20 -- # IFS=: 00:07:10.001 17:44:14 -- accel/accel.sh@20 -- # read -r var val 00:07:10.002 17:44:14 -- accel/accel.sh@21 -- # val=32 00:07:10.002 17:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.002 17:44:14 -- accel/accel.sh@20 -- # IFS=: 00:07:10.002 17:44:14 -- accel/accel.sh@20 -- # read -r var val 00:07:10.002 17:44:14 -- 
accel/accel.sh@21 -- # val=1 00:07:10.002 17:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.002 17:44:14 -- accel/accel.sh@20 -- # IFS=: 00:07:10.002 17:44:14 -- accel/accel.sh@20 -- # read -r var val 00:07:10.002 17:44:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:10.002 17:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.002 17:44:14 -- accel/accel.sh@20 -- # IFS=: 00:07:10.002 17:44:14 -- accel/accel.sh@20 -- # read -r var val 00:07:10.002 17:44:14 -- accel/accel.sh@21 -- # val=Yes 00:07:10.002 17:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.002 17:44:14 -- accel/accel.sh@20 -- # IFS=: 00:07:10.002 17:44:14 -- accel/accel.sh@20 -- # read -r var val 00:07:10.002 17:44:14 -- accel/accel.sh@21 -- # val= 00:07:10.002 17:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.002 17:44:14 -- accel/accel.sh@20 -- # IFS=: 00:07:10.002 17:44:14 -- accel/accel.sh@20 -- # read -r var val 00:07:10.002 17:44:14 -- accel/accel.sh@21 -- # val= 00:07:10.002 17:44:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.002 17:44:14 -- accel/accel.sh@20 -- # IFS=: 00:07:10.002 17:44:14 -- accel/accel.sh@20 -- # read -r var val 00:07:11.408 17:44:15 -- accel/accel.sh@21 -- # val= 00:07:11.408 17:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.408 17:44:15 -- accel/accel.sh@20 -- # IFS=: 00:07:11.408 17:44:15 -- accel/accel.sh@20 -- # read -r var val 00:07:11.408 17:44:15 -- accel/accel.sh@21 -- # val= 00:07:11.408 17:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.408 17:44:15 -- accel/accel.sh@20 -- # IFS=: 00:07:11.408 17:44:15 -- accel/accel.sh@20 -- # read -r var val 00:07:11.408 17:44:15 -- accel/accel.sh@21 -- # val= 00:07:11.408 17:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.409 17:44:15 -- accel/accel.sh@20 -- # IFS=: 00:07:11.409 17:44:15 -- accel/accel.sh@20 -- # read -r var val 00:07:11.409 17:44:15 -- accel/accel.sh@21 -- # val= 00:07:11.409 17:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.409 17:44:15 -- accel/accel.sh@20 -- # IFS=: 00:07:11.409 17:44:15 -- accel/accel.sh@20 -- # read -r var val 00:07:11.409 17:44:15 -- accel/accel.sh@21 -- # val= 00:07:11.409 17:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.409 17:44:15 -- accel/accel.sh@20 -- # IFS=: 00:07:11.409 17:44:15 -- accel/accel.sh@20 -- # read -r var val 00:07:11.409 17:44:15 -- accel/accel.sh@21 -- # val= 00:07:11.409 17:44:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.409 17:44:15 -- accel/accel.sh@20 -- # IFS=: 00:07:11.409 17:44:15 -- accel/accel.sh@20 -- # read -r var val 00:07:11.409 17:44:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:11.409 17:44:15 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:11.409 17:44:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.409 00:07:11.409 real 0m2.602s 00:07:11.409 user 0m2.366s 00:07:11.409 sys 0m0.242s 00:07:11.409 17:44:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.409 17:44:15 -- common/autotest_common.sh@10 -- # set +x 00:07:11.409 ************************************ 00:07:11.409 END TEST accel_xor 00:07:11.409 ************************************ 00:07:11.409 17:44:15 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:11.409 17:44:15 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:11.409 17:44:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.409 17:44:15 -- common/autotest_common.sh@10 -- # set +x 00:07:11.409 ************************************ 00:07:11.409 START TEST 
accel_dif_verify 00:07:11.409 ************************************ 00:07:11.409 17:44:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:07:11.409 17:44:15 -- accel/accel.sh@16 -- # local accel_opc 00:07:11.409 17:44:15 -- accel/accel.sh@17 -- # local accel_module 00:07:11.409 17:44:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:11.409 17:44:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:11.409 17:44:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.409 17:44:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.409 17:44:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.409 17:44:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.409 17:44:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.409 17:44:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.409 17:44:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.409 17:44:15 -- accel/accel.sh@42 -- # jq -r . 00:07:11.409 [2024-07-22 17:44:15.377389] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:11.409 [2024-07-22 17:44:15.377461] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1498859 ] 00:07:11.409 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.409 [2024-07-22 17:44:15.462999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.409 [2024-07-22 17:44:15.533577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.793 17:44:16 -- accel/accel.sh@18 -- # out=' 00:07:12.793 SPDK Configuration: 00:07:12.793 Core mask: 0x1 00:07:12.793 00:07:12.793 Accel Perf Configuration: 00:07:12.793 Workload Type: dif_verify 00:07:12.793 Vector size: 4096 bytes 00:07:12.793 Transfer size: 4096 bytes 00:07:12.793 Block size: 512 bytes 00:07:12.793 Metadata size: 8 bytes 00:07:12.793 Vector count 1 00:07:12.793 Module: software 00:07:12.793 Queue depth: 32 00:07:12.793 Allocate depth: 32 00:07:12.793 # threads/core: 1 00:07:12.793 Run time: 1 seconds 00:07:12.793 Verify: No 00:07:12.793 00:07:12.793 Running for 1 seconds... 00:07:12.793 00:07:12.793 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:12.793 ------------------------------------------------------------------------------------ 00:07:12.793 0,0 102592/s 407 MiB/s 0 0 00:07:12.793 ==================================================================================== 00:07:12.793 Total 102592/s 400 MiB/s 0 0' 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # IFS=: 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # read -r var val 00:07:12.793 17:44:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:12.793 17:44:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:12.793 17:44:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.793 17:44:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.793 17:44:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.793 17:44:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.793 17:44:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.793 17:44:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.793 17:44:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.793 17:44:16 -- accel/accel.sh@42 -- # jq -r . 
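The dif_verify runs traced here are launched by accel.sh, which feeds a JSON accel configuration to accel_perf over /dev/fd/62. Below is a minimal sketch of reproducing the same workload by hand: the binary path and the -t/-w flags are copied from the accel_perf command line in the log, while dropping the -c /dev/fd/62 config is an assumption (with no config supplied, accel_perf should fall back to the software module, which is the module reported in the summary above).

#!/usr/bin/env bash
# Sketch only: re-run the dif_verify workload from this log outside the CI harness.
# Assumes hugepages are already configured (note the EAL hugepage messages above)
# and that SPDK has been built in the workspace path shown in the traced command.
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path copied from the log

# -t 1 (run for 1 second) and -w dif_verify (workload type) mirror the traced
# accel_perf invocation; the -c /dev/fd/62 JSON config is intentionally omitted,
# which should leave accel_perf on the software accel module.
"$SPDK/build/examples/accel_perf" -t 1 -w dif_verify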
00:07:12.793 [2024-07-22 17:44:16.681934] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:12.793 [2024-07-22 17:44:16.682012] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1499120 ] 00:07:12.793 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.793 [2024-07-22 17:44:16.763309] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.793 [2024-07-22 17:44:16.825967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.793 17:44:16 -- accel/accel.sh@21 -- # val= 00:07:12.793 17:44:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # IFS=: 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # read -r var val 00:07:12.793 17:44:16 -- accel/accel.sh@21 -- # val= 00:07:12.793 17:44:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # IFS=: 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # read -r var val 00:07:12.793 17:44:16 -- accel/accel.sh@21 -- # val=0x1 00:07:12.793 17:44:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # IFS=: 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # read -r var val 00:07:12.793 17:44:16 -- accel/accel.sh@21 -- # val= 00:07:12.793 17:44:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # IFS=: 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # read -r var val 00:07:12.793 17:44:16 -- accel/accel.sh@21 -- # val= 00:07:12.793 17:44:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # IFS=: 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # read -r var val 00:07:12.793 17:44:16 -- accel/accel.sh@21 -- # val=dif_verify 00:07:12.793 17:44:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.793 17:44:16 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # IFS=: 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # read -r var val 00:07:12.793 17:44:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:12.793 17:44:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # IFS=: 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # read -r var val 00:07:12.793 17:44:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:12.793 17:44:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # IFS=: 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # read -r var val 00:07:12.793 17:44:16 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:12.793 17:44:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # IFS=: 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # read -r var val 00:07:12.793 17:44:16 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:12.793 17:44:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # IFS=: 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # read -r var val 00:07:12.793 17:44:16 -- accel/accel.sh@21 -- # val= 00:07:12.793 17:44:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # IFS=: 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # read -r var val 00:07:12.793 17:44:16 -- accel/accel.sh@21 -- # val=software 00:07:12.793 17:44:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.793 17:44:16 -- accel/accel.sh@23 -- # 
accel_module=software 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # IFS=: 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # read -r var val 00:07:12.793 17:44:16 -- accel/accel.sh@21 -- # val=32 00:07:12.793 17:44:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # IFS=: 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # read -r var val 00:07:12.793 17:44:16 -- accel/accel.sh@21 -- # val=32 00:07:12.793 17:44:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # IFS=: 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # read -r var val 00:07:12.793 17:44:16 -- accel/accel.sh@21 -- # val=1 00:07:12.793 17:44:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # IFS=: 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # read -r var val 00:07:12.793 17:44:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:12.793 17:44:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # IFS=: 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # read -r var val 00:07:12.793 17:44:16 -- accel/accel.sh@21 -- # val=No 00:07:12.793 17:44:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # IFS=: 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # read -r var val 00:07:12.793 17:44:16 -- accel/accel.sh@21 -- # val= 00:07:12.793 17:44:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # IFS=: 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # read -r var val 00:07:12.793 17:44:16 -- accel/accel.sh@21 -- # val= 00:07:12.793 17:44:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # IFS=: 00:07:12.793 17:44:16 -- accel/accel.sh@20 -- # read -r var val 00:07:13.736 17:44:17 -- accel/accel.sh@21 -- # val= 00:07:13.736 17:44:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.736 17:44:17 -- accel/accel.sh@20 -- # IFS=: 00:07:13.736 17:44:17 -- accel/accel.sh@20 -- # read -r var val 00:07:13.736 17:44:17 -- accel/accel.sh@21 -- # val= 00:07:13.736 17:44:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.736 17:44:17 -- accel/accel.sh@20 -- # IFS=: 00:07:13.736 17:44:17 -- accel/accel.sh@20 -- # read -r var val 00:07:13.736 17:44:17 -- accel/accel.sh@21 -- # val= 00:07:13.736 17:44:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.736 17:44:17 -- accel/accel.sh@20 -- # IFS=: 00:07:13.736 17:44:17 -- accel/accel.sh@20 -- # read -r var val 00:07:13.736 17:44:17 -- accel/accel.sh@21 -- # val= 00:07:13.736 17:44:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.736 17:44:17 -- accel/accel.sh@20 -- # IFS=: 00:07:13.736 17:44:17 -- accel/accel.sh@20 -- # read -r var val 00:07:13.736 17:44:17 -- accel/accel.sh@21 -- # val= 00:07:13.736 17:44:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.736 17:44:17 -- accel/accel.sh@20 -- # IFS=: 00:07:13.736 17:44:17 -- accel/accel.sh@20 -- # read -r var val 00:07:13.736 17:44:17 -- accel/accel.sh@21 -- # val= 00:07:13.736 17:44:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.736 17:44:17 -- accel/accel.sh@20 -- # IFS=: 00:07:13.736 17:44:17 -- accel/accel.sh@20 -- # read -r var val 00:07:13.736 17:44:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:13.736 17:44:17 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:13.736 17:44:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.736 00:07:13.736 real 0m2.601s 00:07:13.736 user 0m2.370s 00:07:13.736 sys 0m0.239s 00:07:13.736 17:44:17 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.736 17:44:17 -- common/autotest_common.sh@10 -- # set +x 00:07:13.736 ************************************ 00:07:13.736 END TEST accel_dif_verify 00:07:13.736 ************************************ 00:07:13.736 17:44:17 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:13.736 17:44:17 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:13.736 17:44:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.736 17:44:17 -- common/autotest_common.sh@10 -- # set +x 00:07:13.736 ************************************ 00:07:13.736 START TEST accel_dif_generate 00:07:13.736 ************************************ 00:07:13.736 17:44:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:07:13.736 17:44:17 -- accel/accel.sh@16 -- # local accel_opc 00:07:13.736 17:44:17 -- accel/accel.sh@17 -- # local accel_module 00:07:13.736 17:44:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:13.736 17:44:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:13.736 17:44:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.736 17:44:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.736 17:44:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.736 17:44:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.736 17:44:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.736 17:44:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.736 17:44:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.736 17:44:17 -- accel/accel.sh@42 -- # jq -r . 00:07:13.997 [2024-07-22 17:44:18.021458] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:13.997 [2024-07-22 17:44:18.021564] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1499220 ] 00:07:13.997 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.997 [2024-07-22 17:44:18.106265] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.997 [2024-07-22 17:44:18.173201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.381 17:44:19 -- accel/accel.sh@18 -- # out=' 00:07:15.381 SPDK Configuration: 00:07:15.381 Core mask: 0x1 00:07:15.381 00:07:15.381 Accel Perf Configuration: 00:07:15.381 Workload Type: dif_generate 00:07:15.381 Vector size: 4096 bytes 00:07:15.381 Transfer size: 4096 bytes 00:07:15.381 Block size: 512 bytes 00:07:15.381 Metadata size: 8 bytes 00:07:15.381 Vector count 1 00:07:15.381 Module: software 00:07:15.381 Queue depth: 32 00:07:15.381 Allocate depth: 32 00:07:15.381 # threads/core: 1 00:07:15.381 Run time: 1 seconds 00:07:15.381 Verify: No 00:07:15.381 00:07:15.381 Running for 1 seconds... 
00:07:15.381 00:07:15.381 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:15.381 ------------------------------------------------------------------------------------ 00:07:15.381 0,0 123872/s 491 MiB/s 0 0 00:07:15.381 ==================================================================================== 00:07:15.381 Total 123872/s 483 MiB/s 0 0' 00:07:15.381 17:44:19 -- accel/accel.sh@20 -- # IFS=: 00:07:15.381 17:44:19 -- accel/accel.sh@20 -- # read -r var val 00:07:15.381 17:44:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:15.381 17:44:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:15.381 17:44:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.381 17:44:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.381 17:44:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.381 17:44:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.381 17:44:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.381 17:44:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.381 17:44:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.381 17:44:19 -- accel/accel.sh@42 -- # jq -r . 00:07:15.381 [2024-07-22 17:44:19.320583] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:15.381 [2024-07-22 17:44:19.320662] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1499506 ] 00:07:15.381 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.381 [2024-07-22 17:44:19.405491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.381 [2024-07-22 17:44:19.465260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.381 17:44:19 -- accel/accel.sh@21 -- # val= 00:07:15.381 17:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.381 17:44:19 -- accel/accel.sh@20 -- # IFS=: 00:07:15.381 17:44:19 -- accel/accel.sh@20 -- # read -r var val 00:07:15.381 17:44:19 -- accel/accel.sh@21 -- # val= 00:07:15.381 17:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.381 17:44:19 -- accel/accel.sh@20 -- # IFS=: 00:07:15.381 17:44:19 -- accel/accel.sh@20 -- # read -r var val 00:07:15.381 17:44:19 -- accel/accel.sh@21 -- # val=0x1 00:07:15.381 17:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.381 17:44:19 -- accel/accel.sh@20 -- # IFS=: 00:07:15.381 17:44:19 -- accel/accel.sh@20 -- # read -r var val 00:07:15.381 17:44:19 -- accel/accel.sh@21 -- # val= 00:07:15.381 17:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.381 17:44:19 -- accel/accel.sh@20 -- # IFS=: 00:07:15.381 17:44:19 -- accel/accel.sh@20 -- # read -r var val 00:07:15.381 17:44:19 -- accel/accel.sh@21 -- # val= 00:07:15.381 17:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.381 17:44:19 -- accel/accel.sh@20 -- # IFS=: 00:07:15.381 17:44:19 -- accel/accel.sh@20 -- # read -r var val 00:07:15.381 17:44:19 -- accel/accel.sh@21 -- # val=dif_generate 00:07:15.381 17:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.381 17:44:19 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:15.381 17:44:19 -- accel/accel.sh@20 -- # IFS=: 00:07:15.381 17:44:19 -- accel/accel.sh@20 -- # read -r var val 00:07:15.381 17:44:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:15.381 17:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.381 17:44:19 -- accel/accel.sh@20 -- # IFS=: 
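As a quick cross-check, the Total bandwidth figures in these summaries follow directly from the transfer rate and the 4096-byte transfer size listed in the configuration block; a one-line arithmetic sketch for the dif_generate total reported above:

# Rough check of the Total column: transfers/s * 4096 bytes / 2^20 bytes per MiB.
# 123872 * 4096 / 1048576 = 483, matching the "Total 123872/s 483 MiB/s" line above.
echo $(( 123872 * 4096 / 1048576 ))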
00:07:15.381 17:44:19 -- accel/accel.sh@20 -- # read -r var val 00:07:15.381 17:44:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:15.381 17:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.382 17:44:19 -- accel/accel.sh@20 -- # IFS=: 00:07:15.382 17:44:19 -- accel/accel.sh@20 -- # read -r var val 00:07:15.382 17:44:19 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:15.382 17:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.382 17:44:19 -- accel/accel.sh@20 -- # IFS=: 00:07:15.382 17:44:19 -- accel/accel.sh@20 -- # read -r var val 00:07:15.382 17:44:19 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:15.382 17:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.382 17:44:19 -- accel/accel.sh@20 -- # IFS=: 00:07:15.382 17:44:19 -- accel/accel.sh@20 -- # read -r var val 00:07:15.382 17:44:19 -- accel/accel.sh@21 -- # val= 00:07:15.382 17:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.382 17:44:19 -- accel/accel.sh@20 -- # IFS=: 00:07:15.382 17:44:19 -- accel/accel.sh@20 -- # read -r var val 00:07:15.382 17:44:19 -- accel/accel.sh@21 -- # val=software 00:07:15.382 17:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.382 17:44:19 -- accel/accel.sh@23 -- # accel_module=software 00:07:15.382 17:44:19 -- accel/accel.sh@20 -- # IFS=: 00:07:15.382 17:44:19 -- accel/accel.sh@20 -- # read -r var val 00:07:15.382 17:44:19 -- accel/accel.sh@21 -- # val=32 00:07:15.382 17:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.382 17:44:19 -- accel/accel.sh@20 -- # IFS=: 00:07:15.382 17:44:19 -- accel/accel.sh@20 -- # read -r var val 00:07:15.382 17:44:19 -- accel/accel.sh@21 -- # val=32 00:07:15.382 17:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.382 17:44:19 -- accel/accel.sh@20 -- # IFS=: 00:07:15.382 17:44:19 -- accel/accel.sh@20 -- # read -r var val 00:07:15.382 17:44:19 -- accel/accel.sh@21 -- # val=1 00:07:15.382 17:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.382 17:44:19 -- accel/accel.sh@20 -- # IFS=: 00:07:15.382 17:44:19 -- accel/accel.sh@20 -- # read -r var val 00:07:15.382 17:44:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:15.382 17:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.382 17:44:19 -- accel/accel.sh@20 -- # IFS=: 00:07:15.382 17:44:19 -- accel/accel.sh@20 -- # read -r var val 00:07:15.382 17:44:19 -- accel/accel.sh@21 -- # val=No 00:07:15.382 17:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.382 17:44:19 -- accel/accel.sh@20 -- # IFS=: 00:07:15.382 17:44:19 -- accel/accel.sh@20 -- # read -r var val 00:07:15.382 17:44:19 -- accel/accel.sh@21 -- # val= 00:07:15.382 17:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.382 17:44:19 -- accel/accel.sh@20 -- # IFS=: 00:07:15.382 17:44:19 -- accel/accel.sh@20 -- # read -r var val 00:07:15.382 17:44:19 -- accel/accel.sh@21 -- # val= 00:07:15.382 17:44:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.382 17:44:19 -- accel/accel.sh@20 -- # IFS=: 00:07:15.382 17:44:19 -- accel/accel.sh@20 -- # read -r var val 00:07:16.323 17:44:20 -- accel/accel.sh@21 -- # val= 00:07:16.323 17:44:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.323 17:44:20 -- accel/accel.sh@20 -- # IFS=: 00:07:16.323 17:44:20 -- accel/accel.sh@20 -- # read -r var val 00:07:16.323 17:44:20 -- accel/accel.sh@21 -- # val= 00:07:16.323 17:44:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.323 17:44:20 -- accel/accel.sh@20 -- # IFS=: 00:07:16.323 17:44:20 -- accel/accel.sh@20 -- # read -r var val 00:07:16.323 17:44:20 -- accel/accel.sh@21 -- # val= 00:07:16.323 17:44:20 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:16.323 17:44:20 -- accel/accel.sh@20 -- # IFS=: 00:07:16.323 17:44:20 -- accel/accel.sh@20 -- # read -r var val 00:07:16.323 17:44:20 -- accel/accel.sh@21 -- # val= 00:07:16.323 17:44:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.323 17:44:20 -- accel/accel.sh@20 -- # IFS=: 00:07:16.323 17:44:20 -- accel/accel.sh@20 -- # read -r var val 00:07:16.323 17:44:20 -- accel/accel.sh@21 -- # val= 00:07:16.323 17:44:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.323 17:44:20 -- accel/accel.sh@20 -- # IFS=: 00:07:16.323 17:44:20 -- accel/accel.sh@20 -- # read -r var val 00:07:16.323 17:44:20 -- accel/accel.sh@21 -- # val= 00:07:16.323 17:44:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.323 17:44:20 -- accel/accel.sh@20 -- # IFS=: 00:07:16.323 17:44:20 -- accel/accel.sh@20 -- # read -r var val 00:07:16.323 17:44:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:16.323 17:44:20 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:16.323 17:44:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.323 00:07:16.323 real 0m2.597s 00:07:16.323 user 0m2.364s 00:07:16.323 sys 0m0.239s 00:07:16.323 17:44:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.323 17:44:20 -- common/autotest_common.sh@10 -- # set +x 00:07:16.323 ************************************ 00:07:16.323 END TEST accel_dif_generate 00:07:16.323 ************************************ 00:07:16.584 17:44:20 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:16.584 17:44:20 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:16.584 17:44:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:16.584 17:44:20 -- common/autotest_common.sh@10 -- # set +x 00:07:16.584 ************************************ 00:07:16.584 START TEST accel_dif_generate_copy 00:07:16.584 ************************************ 00:07:16.584 17:44:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:07:16.584 17:44:20 -- accel/accel.sh@16 -- # local accel_opc 00:07:16.584 17:44:20 -- accel/accel.sh@17 -- # local accel_module 00:07:16.584 17:44:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:16.584 17:44:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:16.584 17:44:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.584 17:44:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.584 17:44:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.584 17:44:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.584 17:44:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.584 17:44:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.584 17:44:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.584 17:44:20 -- accel/accel.sh@42 -- # jq -r . 00:07:16.584 [2024-07-22 17:44:20.660881] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:16.584 [2024-07-22 17:44:20.660953] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1499831 ] 00:07:16.584 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.584 [2024-07-22 17:44:20.745047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.584 [2024-07-22 17:44:20.806703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.968 17:44:21 -- accel/accel.sh@18 -- # out=' 00:07:17.968 SPDK Configuration: 00:07:17.968 Core mask: 0x1 00:07:17.968 00:07:17.968 Accel Perf Configuration: 00:07:17.968 Workload Type: dif_generate_copy 00:07:17.968 Vector size: 4096 bytes 00:07:17.968 Transfer size: 4096 bytes 00:07:17.968 Vector count 1 00:07:17.968 Module: software 00:07:17.968 Queue depth: 32 00:07:17.968 Allocate depth: 32 00:07:17.968 # threads/core: 1 00:07:17.968 Run time: 1 seconds 00:07:17.968 Verify: No 00:07:17.968 00:07:17.968 Running for 1 seconds... 00:07:17.968 00:07:17.968 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:17.968 ------------------------------------------------------------------------------------ 00:07:17.968 0,0 94912/s 376 MiB/s 0 0 00:07:17.968 ==================================================================================== 00:07:17.968 Total 94912/s 370 MiB/s 0 0' 00:07:17.968 17:44:21 -- accel/accel.sh@20 -- # IFS=: 00:07:17.968 17:44:21 -- accel/accel.sh@20 -- # read -r var val 00:07:17.968 17:44:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:17.968 17:44:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:17.968 17:44:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.968 17:44:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.968 17:44:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.968 17:44:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.968 17:44:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.968 17:44:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.968 17:44:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.968 17:44:21 -- accel/accel.sh@42 -- # jq -r . 00:07:17.968 [2024-07-22 17:44:21.954983] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:17.968 [2024-07-22 17:44:21.955058] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1500009 ] 00:07:17.968 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.968 [2024-07-22 17:44:22.038018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.968 [2024-07-22 17:44:22.097734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.968 17:44:22 -- accel/accel.sh@21 -- # val= 00:07:17.968 17:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # IFS=: 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # read -r var val 00:07:17.968 17:44:22 -- accel/accel.sh@21 -- # val= 00:07:17.968 17:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # IFS=: 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # read -r var val 00:07:17.968 17:44:22 -- accel/accel.sh@21 -- # val=0x1 00:07:17.968 17:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # IFS=: 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # read -r var val 00:07:17.968 17:44:22 -- accel/accel.sh@21 -- # val= 00:07:17.968 17:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # IFS=: 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # read -r var val 00:07:17.968 17:44:22 -- accel/accel.sh@21 -- # val= 00:07:17.968 17:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # IFS=: 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # read -r var val 00:07:17.968 17:44:22 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:17.968 17:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.968 17:44:22 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # IFS=: 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # read -r var val 00:07:17.968 17:44:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:17.968 17:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # IFS=: 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # read -r var val 00:07:17.968 17:44:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:17.968 17:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # IFS=: 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # read -r var val 00:07:17.968 17:44:22 -- accel/accel.sh@21 -- # val= 00:07:17.968 17:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # IFS=: 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # read -r var val 00:07:17.968 17:44:22 -- accel/accel.sh@21 -- # val=software 00:07:17.968 17:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.968 17:44:22 -- accel/accel.sh@23 -- # accel_module=software 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # IFS=: 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # read -r var val 00:07:17.968 17:44:22 -- accel/accel.sh@21 -- # val=32 00:07:17.968 17:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # IFS=: 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # read -r var val 00:07:17.968 17:44:22 -- accel/accel.sh@21 -- # val=32 00:07:17.968 17:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # IFS=: 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # read -r 
var val 00:07:17.968 17:44:22 -- accel/accel.sh@21 -- # val=1 00:07:17.968 17:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # IFS=: 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # read -r var val 00:07:17.968 17:44:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:17.968 17:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # IFS=: 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # read -r var val 00:07:17.968 17:44:22 -- accel/accel.sh@21 -- # val=No 00:07:17.968 17:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # IFS=: 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # read -r var val 00:07:17.968 17:44:22 -- accel/accel.sh@21 -- # val= 00:07:17.968 17:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # IFS=: 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # read -r var val 00:07:17.968 17:44:22 -- accel/accel.sh@21 -- # val= 00:07:17.968 17:44:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # IFS=: 00:07:17.968 17:44:22 -- accel/accel.sh@20 -- # read -r var val 00:07:19.364 17:44:23 -- accel/accel.sh@21 -- # val= 00:07:19.364 17:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.364 17:44:23 -- accel/accel.sh@20 -- # IFS=: 00:07:19.364 17:44:23 -- accel/accel.sh@20 -- # read -r var val 00:07:19.364 17:44:23 -- accel/accel.sh@21 -- # val= 00:07:19.364 17:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.365 17:44:23 -- accel/accel.sh@20 -- # IFS=: 00:07:19.365 17:44:23 -- accel/accel.sh@20 -- # read -r var val 00:07:19.365 17:44:23 -- accel/accel.sh@21 -- # val= 00:07:19.365 17:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.365 17:44:23 -- accel/accel.sh@20 -- # IFS=: 00:07:19.365 17:44:23 -- accel/accel.sh@20 -- # read -r var val 00:07:19.365 17:44:23 -- accel/accel.sh@21 -- # val= 00:07:19.365 17:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.365 17:44:23 -- accel/accel.sh@20 -- # IFS=: 00:07:19.365 17:44:23 -- accel/accel.sh@20 -- # read -r var val 00:07:19.365 17:44:23 -- accel/accel.sh@21 -- # val= 00:07:19.365 17:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.365 17:44:23 -- accel/accel.sh@20 -- # IFS=: 00:07:19.365 17:44:23 -- accel/accel.sh@20 -- # read -r var val 00:07:19.365 17:44:23 -- accel/accel.sh@21 -- # val= 00:07:19.365 17:44:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.365 17:44:23 -- accel/accel.sh@20 -- # IFS=: 00:07:19.365 17:44:23 -- accel/accel.sh@20 -- # read -r var val 00:07:19.365 17:44:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:19.365 17:44:23 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:19.365 17:44:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.365 00:07:19.365 real 0m2.589s 00:07:19.365 user 0m2.366s 00:07:19.365 sys 0m0.229s 00:07:19.365 17:44:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.365 17:44:23 -- common/autotest_common.sh@10 -- # set +x 00:07:19.365 ************************************ 00:07:19.365 END TEST accel_dif_generate_copy 00:07:19.365 ************************************ 00:07:19.365 17:44:23 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:19.365 17:44:23 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:19.365 17:44:23 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:19.365 17:44:23 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:07:19.365 17:44:23 -- common/autotest_common.sh@10 -- # set +x 00:07:19.365 ************************************ 00:07:19.365 START TEST accel_comp 00:07:19.365 ************************************ 00:07:19.365 17:44:23 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:19.365 17:44:23 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.365 17:44:23 -- accel/accel.sh@17 -- # local accel_module 00:07:19.365 17:44:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:19.365 17:44:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:19.365 17:44:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.365 17:44:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.365 17:44:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.365 17:44:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.365 17:44:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.365 17:44:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.365 17:44:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.365 17:44:23 -- accel/accel.sh@42 -- # jq -r . 00:07:19.365 [2024-07-22 17:44:23.292594] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:19.365 [2024-07-22 17:44:23.292679] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1500184 ] 00:07:19.365 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.365 [2024-07-22 17:44:23.377495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.365 [2024-07-22 17:44:23.452013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.308 17:44:24 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:20.308 00:07:20.308 SPDK Configuration: 00:07:20.308 Core mask: 0x1 00:07:20.308 00:07:20.308 Accel Perf Configuration: 00:07:20.308 Workload Type: compress 00:07:20.308 Transfer size: 4096 bytes 00:07:20.308 Vector count 1 00:07:20.308 Module: software 00:07:20.308 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:20.308 Queue depth: 32 00:07:20.308 Allocate depth: 32 00:07:20.308 # threads/core: 1 00:07:20.308 Run time: 1 seconds 00:07:20.308 Verify: No 00:07:20.308 00:07:20.308 Running for 1 seconds... 
00:07:20.308 00:07:20.308 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:20.308 ------------------------------------------------------------------------------------ 00:07:20.308 0,0 51616/s 215 MiB/s 0 0 00:07:20.308 ==================================================================================== 00:07:20.308 Total 51616/s 201 MiB/s 0 0' 00:07:20.308 17:44:24 -- accel/accel.sh@20 -- # IFS=: 00:07:20.308 17:44:24 -- accel/accel.sh@20 -- # read -r var val 00:07:20.308 17:44:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:20.308 17:44:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:20.308 17:44:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.308 17:44:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.308 17:44:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.308 17:44:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.308 17:44:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.308 17:44:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.308 17:44:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.308 17:44:24 -- accel/accel.sh@42 -- # jq -r . 00:07:20.569 [2024-07-22 17:44:24.604712] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:20.569 [2024-07-22 17:44:24.604792] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1500469 ] 00:07:20.569 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.569 [2024-07-22 17:44:24.690165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.569 [2024-07-22 17:44:24.753088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.569 17:44:24 -- accel/accel.sh@21 -- # val= 00:07:20.569 17:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # IFS=: 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # read -r var val 00:07:20.569 17:44:24 -- accel/accel.sh@21 -- # val= 00:07:20.569 17:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # IFS=: 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # read -r var val 00:07:20.569 17:44:24 -- accel/accel.sh@21 -- # val= 00:07:20.569 17:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # IFS=: 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # read -r var val 00:07:20.569 17:44:24 -- accel/accel.sh@21 -- # val=0x1 00:07:20.569 17:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # IFS=: 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # read -r var val 00:07:20.569 17:44:24 -- accel/accel.sh@21 -- # val= 00:07:20.569 17:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # IFS=: 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # read -r var val 00:07:20.569 17:44:24 -- accel/accel.sh@21 -- # val= 00:07:20.569 17:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # IFS=: 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # read -r var val 00:07:20.569 17:44:24 -- accel/accel.sh@21 -- # val=compress 00:07:20.569 17:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.569 
17:44:24 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # IFS=: 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # read -r var val 00:07:20.569 17:44:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:20.569 17:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # IFS=: 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # read -r var val 00:07:20.569 17:44:24 -- accel/accel.sh@21 -- # val= 00:07:20.569 17:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # IFS=: 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # read -r var val 00:07:20.569 17:44:24 -- accel/accel.sh@21 -- # val=software 00:07:20.569 17:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.569 17:44:24 -- accel/accel.sh@23 -- # accel_module=software 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # IFS=: 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # read -r var val 00:07:20.569 17:44:24 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:20.569 17:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # IFS=: 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # read -r var val 00:07:20.569 17:44:24 -- accel/accel.sh@21 -- # val=32 00:07:20.569 17:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # IFS=: 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # read -r var val 00:07:20.569 17:44:24 -- accel/accel.sh@21 -- # val=32 00:07:20.569 17:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # IFS=: 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # read -r var val 00:07:20.569 17:44:24 -- accel/accel.sh@21 -- # val=1 00:07:20.569 17:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # IFS=: 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # read -r var val 00:07:20.569 17:44:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:20.569 17:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # IFS=: 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # read -r var val 00:07:20.569 17:44:24 -- accel/accel.sh@21 -- # val=No 00:07:20.569 17:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # IFS=: 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # read -r var val 00:07:20.569 17:44:24 -- accel/accel.sh@21 -- # val= 00:07:20.569 17:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # IFS=: 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # read -r var val 00:07:20.569 17:44:24 -- accel/accel.sh@21 -- # val= 00:07:20.569 17:44:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # IFS=: 00:07:20.569 17:44:24 -- accel/accel.sh@20 -- # read -r var val 00:07:21.953 17:44:25 -- accel/accel.sh@21 -- # val= 00:07:21.953 17:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.953 17:44:25 -- accel/accel.sh@20 -- # IFS=: 00:07:21.953 17:44:25 -- accel/accel.sh@20 -- # read -r var val 00:07:21.953 17:44:25 -- accel/accel.sh@21 -- # val= 00:07:21.953 17:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.953 17:44:25 -- accel/accel.sh@20 -- # IFS=: 00:07:21.953 17:44:25 -- accel/accel.sh@20 -- # read -r var val 00:07:21.953 17:44:25 -- accel/accel.sh@21 -- # val= 00:07:21.953 17:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.953 17:44:25 -- accel/accel.sh@20 -- # 
IFS=: 00:07:21.953 17:44:25 -- accel/accel.sh@20 -- # read -r var val 00:07:21.953 17:44:25 -- accel/accel.sh@21 -- # val= 00:07:21.953 17:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.953 17:44:25 -- accel/accel.sh@20 -- # IFS=: 00:07:21.953 17:44:25 -- accel/accel.sh@20 -- # read -r var val 00:07:21.953 17:44:25 -- accel/accel.sh@21 -- # val= 00:07:21.953 17:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.953 17:44:25 -- accel/accel.sh@20 -- # IFS=: 00:07:21.953 17:44:25 -- accel/accel.sh@20 -- # read -r var val 00:07:21.953 17:44:25 -- accel/accel.sh@21 -- # val= 00:07:21.953 17:44:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.953 17:44:25 -- accel/accel.sh@20 -- # IFS=: 00:07:21.953 17:44:25 -- accel/accel.sh@20 -- # read -r var val 00:07:21.953 17:44:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:21.953 17:44:25 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:21.953 17:44:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.953 00:07:21.953 real 0m2.614s 00:07:21.953 user 0m2.372s 00:07:21.953 sys 0m0.249s 00:07:21.953 17:44:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.953 17:44:25 -- common/autotest_common.sh@10 -- # set +x 00:07:21.953 ************************************ 00:07:21.953 END TEST accel_comp 00:07:21.953 ************************************ 00:07:21.953 17:44:25 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:21.953 17:44:25 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:21.953 17:44:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:21.953 17:44:25 -- common/autotest_common.sh@10 -- # set +x 00:07:21.953 ************************************ 00:07:21.953 START TEST accel_decomp 00:07:21.953 ************************************ 00:07:21.953 17:44:25 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:21.953 17:44:25 -- accel/accel.sh@16 -- # local accel_opc 00:07:21.953 17:44:25 -- accel/accel.sh@17 -- # local accel_module 00:07:21.953 17:44:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:21.953 17:44:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:21.953 17:44:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.953 17:44:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.953 17:44:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.953 17:44:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.953 17:44:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.953 17:44:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.953 17:44:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.953 17:44:25 -- accel/accel.sh@42 -- # jq -r . 00:07:21.953 [2024-07-22 17:44:25.948090] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
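Note on the accel_comp case that just wrapped up: it is, at its core, a single accel_perf invocation against the software module. A minimal sketch of reproducing it by hand, assuming the same workspace path and using only the flags already visible in the trace (the -c /dev/fd/62 accel config that accel.sh feeds in is left out of the sketch):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # 1-second software 'compress' run over the bundled test input
  "$SPDK/build/examples/accel_perf" -t 1 -w compress -l "$SPDK/test/accel/bib"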
00:07:21.953 [2024-07-22 17:44:25.948169] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1500795 ] 00:07:21.953 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.953 [2024-07-22 17:44:26.031698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.953 [2024-07-22 17:44:26.095900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.337 17:44:27 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:23.337 00:07:23.337 SPDK Configuration: 00:07:23.337 Core mask: 0x1 00:07:23.337 00:07:23.337 Accel Perf Configuration: 00:07:23.337 Workload Type: decompress 00:07:23.337 Transfer size: 4096 bytes 00:07:23.337 Vector count 1 00:07:23.337 Module: software 00:07:23.337 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:23.337 Queue depth: 32 00:07:23.337 Allocate depth: 32 00:07:23.337 # threads/core: 1 00:07:23.337 Run time: 1 seconds 00:07:23.337 Verify: Yes 00:07:23.337 00:07:23.337 Running for 1 seconds... 00:07:23.337 00:07:23.337 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:23.337 ------------------------------------------------------------------------------------ 00:07:23.337 0,0 68352/s 125 MiB/s 0 0 00:07:23.337 ==================================================================================== 00:07:23.337 Total 68352/s 267 MiB/s 0 0' 00:07:23.337 17:44:27 -- accel/accel.sh@20 -- # IFS=: 00:07:23.337 17:44:27 -- accel/accel.sh@20 -- # read -r var val 00:07:23.337 17:44:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:23.337 17:44:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:23.337 17:44:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.337 17:44:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.337 17:44:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.337 17:44:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.337 17:44:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.337 17:44:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.337 17:44:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.337 17:44:27 -- accel/accel.sh@42 -- # jq -r . 00:07:23.337 [2024-07-22 17:44:27.247001] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
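The bandwidth column of each summary can be sanity-checked from the Transfers column and the configured transfer size; for the 4096-byte decompress table above, assuming MiB/s is simply transfers per second times transfer size:

  # 68352 transfers/s x 4096 B per transfer, in MiB/s
  echo $((68352 * 4096 / 1024 / 1024))   # prints 267, matching 'Total 68352/s 267 MiB/s'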
00:07:23.337 [2024-07-22 17:44:27.247103] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1500916 ] 00:07:23.337 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.337 [2024-07-22 17:44:27.329458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.337 [2024-07-22 17:44:27.392836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.337 17:44:27 -- accel/accel.sh@21 -- # val= 00:07:23.337 17:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.337 17:44:27 -- accel/accel.sh@20 -- # IFS=: 00:07:23.337 17:44:27 -- accel/accel.sh@20 -- # read -r var val 00:07:23.337 17:44:27 -- accel/accel.sh@21 -- # val= 00:07:23.338 17:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # IFS=: 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # read -r var val 00:07:23.338 17:44:27 -- accel/accel.sh@21 -- # val= 00:07:23.338 17:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # IFS=: 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # read -r var val 00:07:23.338 17:44:27 -- accel/accel.sh@21 -- # val=0x1 00:07:23.338 17:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # IFS=: 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # read -r var val 00:07:23.338 17:44:27 -- accel/accel.sh@21 -- # val= 00:07:23.338 17:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # IFS=: 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # read -r var val 00:07:23.338 17:44:27 -- accel/accel.sh@21 -- # val= 00:07:23.338 17:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # IFS=: 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # read -r var val 00:07:23.338 17:44:27 -- accel/accel.sh@21 -- # val=decompress 00:07:23.338 17:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.338 17:44:27 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # IFS=: 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # read -r var val 00:07:23.338 17:44:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:23.338 17:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # IFS=: 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # read -r var val 00:07:23.338 17:44:27 -- accel/accel.sh@21 -- # val= 00:07:23.338 17:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # IFS=: 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # read -r var val 00:07:23.338 17:44:27 -- accel/accel.sh@21 -- # val=software 00:07:23.338 17:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.338 17:44:27 -- accel/accel.sh@23 -- # accel_module=software 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # IFS=: 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # read -r var val 00:07:23.338 17:44:27 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:23.338 17:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # IFS=: 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # read -r var val 00:07:23.338 17:44:27 -- accel/accel.sh@21 -- # val=32 00:07:23.338 17:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # IFS=: 00:07:23.338 17:44:27 
-- accel/accel.sh@20 -- # read -r var val 00:07:23.338 17:44:27 -- accel/accel.sh@21 -- # val=32 00:07:23.338 17:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # IFS=: 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # read -r var val 00:07:23.338 17:44:27 -- accel/accel.sh@21 -- # val=1 00:07:23.338 17:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # IFS=: 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # read -r var val 00:07:23.338 17:44:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:23.338 17:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # IFS=: 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # read -r var val 00:07:23.338 17:44:27 -- accel/accel.sh@21 -- # val=Yes 00:07:23.338 17:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # IFS=: 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # read -r var val 00:07:23.338 17:44:27 -- accel/accel.sh@21 -- # val= 00:07:23.338 17:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # IFS=: 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # read -r var val 00:07:23.338 17:44:27 -- accel/accel.sh@21 -- # val= 00:07:23.338 17:44:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # IFS=: 00:07:23.338 17:44:27 -- accel/accel.sh@20 -- # read -r var val 00:07:24.279 17:44:28 -- accel/accel.sh@21 -- # val= 00:07:24.279 17:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.279 17:44:28 -- accel/accel.sh@20 -- # IFS=: 00:07:24.279 17:44:28 -- accel/accel.sh@20 -- # read -r var val 00:07:24.279 17:44:28 -- accel/accel.sh@21 -- # val= 00:07:24.279 17:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.279 17:44:28 -- accel/accel.sh@20 -- # IFS=: 00:07:24.279 17:44:28 -- accel/accel.sh@20 -- # read -r var val 00:07:24.279 17:44:28 -- accel/accel.sh@21 -- # val= 00:07:24.279 17:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.279 17:44:28 -- accel/accel.sh@20 -- # IFS=: 00:07:24.279 17:44:28 -- accel/accel.sh@20 -- # read -r var val 00:07:24.279 17:44:28 -- accel/accel.sh@21 -- # val= 00:07:24.279 17:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.280 17:44:28 -- accel/accel.sh@20 -- # IFS=: 00:07:24.280 17:44:28 -- accel/accel.sh@20 -- # read -r var val 00:07:24.280 17:44:28 -- accel/accel.sh@21 -- # val= 00:07:24.280 17:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.280 17:44:28 -- accel/accel.sh@20 -- # IFS=: 00:07:24.280 17:44:28 -- accel/accel.sh@20 -- # read -r var val 00:07:24.280 17:44:28 -- accel/accel.sh@21 -- # val= 00:07:24.280 17:44:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.280 17:44:28 -- accel/accel.sh@20 -- # IFS=: 00:07:24.280 17:44:28 -- accel/accel.sh@20 -- # read -r var val 00:07:24.280 17:44:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:24.280 17:44:28 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:24.280 17:44:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.280 00:07:24.280 real 0m2.597s 00:07:24.280 user 0m2.373s 00:07:24.280 sys 0m0.232s 00:07:24.280 17:44:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.280 17:44:28 -- common/autotest_common.sh@10 -- # set +x 00:07:24.280 ************************************ 00:07:24.280 END TEST accel_decomp 00:07:24.280 ************************************ 00:07:24.540 17:44:28 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:24.540 17:44:28 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:24.540 17:44:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:24.540 17:44:28 -- common/autotest_common.sh@10 -- # set +x 00:07:24.540 ************************************ 00:07:24.540 START TEST accel_decmop_full 00:07:24.540 ************************************ 00:07:24.540 17:44:28 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:24.540 17:44:28 -- accel/accel.sh@16 -- # local accel_opc 00:07:24.540 17:44:28 -- accel/accel.sh@17 -- # local accel_module 00:07:24.540 17:44:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:24.540 17:44:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:24.540 17:44:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.540 17:44:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.540 17:44:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.540 17:44:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.540 17:44:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.540 17:44:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.540 17:44:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.540 17:44:28 -- accel/accel.sh@42 -- # jq -r . 00:07:24.540 [2024-07-22 17:44:28.590337] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:24.540 [2024-07-22 17:44:28.590441] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501131 ] 00:07:24.540 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.540 [2024-07-22 17:44:28.677028] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.540 [2024-07-22 17:44:28.741133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.923 17:44:29 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:25.923 00:07:25.923 SPDK Configuration: 00:07:25.923 Core mask: 0x1 00:07:25.923 00:07:25.923 Accel Perf Configuration: 00:07:25.923 Workload Type: decompress 00:07:25.923 Transfer size: 111250 bytes 00:07:25.923 Vector count 1 00:07:25.923 Module: software 00:07:25.923 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:25.923 Queue depth: 32 00:07:25.923 Allocate depth: 32 00:07:25.923 # threads/core: 1 00:07:25.923 Run time: 1 seconds 00:07:25.923 Verify: Yes 00:07:25.923 00:07:25.923 Running for 1 seconds... 
00:07:25.923 00:07:25.923 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:25.923 ------------------------------------------------------------------------------------ 00:07:25.923 0,0 4416/s 182 MiB/s 0 0 00:07:25.923 ==================================================================================== 00:07:25.923 Total 4416/s 468 MiB/s 0 0' 00:07:25.923 17:44:29 -- accel/accel.sh@20 -- # IFS=: 00:07:25.923 17:44:29 -- accel/accel.sh@20 -- # read -r var val 00:07:25.923 17:44:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:25.923 17:44:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:25.923 17:44:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.923 17:44:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.923 17:44:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.923 17:44:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.923 17:44:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.923 17:44:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.923 17:44:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.923 17:44:29 -- accel/accel.sh@42 -- # jq -r . 00:07:25.923 [2024-07-22 17:44:29.904546] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:25.923 [2024-07-22 17:44:29.904643] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501441 ] 00:07:25.923 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.923 [2024-07-22 17:44:29.986793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.923 [2024-07-22 17:44:30.051188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.923 17:44:30 -- accel/accel.sh@21 -- # val= 00:07:25.923 17:44:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.923 17:44:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.923 17:44:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.923 17:44:30 -- accel/accel.sh@21 -- # val= 00:07:25.923 17:44:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.923 17:44:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.923 17:44:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.923 17:44:30 -- accel/accel.sh@21 -- # val= 00:07:25.923 17:44:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.923 17:44:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.923 17:44:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.923 17:44:30 -- accel/accel.sh@21 -- # val=0x1 00:07:25.923 17:44:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.923 17:44:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.923 17:44:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.923 17:44:30 -- accel/accel.sh@21 -- # val= 00:07:25.923 17:44:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.923 17:44:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.923 17:44:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.923 17:44:30 -- accel/accel.sh@21 -- # val= 00:07:25.923 17:44:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.923 17:44:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.923 17:44:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.923 17:44:30 -- accel/accel.sh@21 -- # val=decompress 00:07:25.923 17:44:30 -- accel/accel.sh@22 -- # case "$var" 
in 00:07:25.923 17:44:30 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:25.923 17:44:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.923 17:44:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.923 17:44:30 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:25.923 17:44:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.923 17:44:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.923 17:44:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.923 17:44:30 -- accel/accel.sh@21 -- # val= 00:07:25.923 17:44:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.923 17:44:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.923 17:44:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.924 17:44:30 -- accel/accel.sh@21 -- # val=software 00:07:25.924 17:44:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.924 17:44:30 -- accel/accel.sh@23 -- # accel_module=software 00:07:25.924 17:44:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.924 17:44:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.924 17:44:30 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:25.924 17:44:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.924 17:44:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.924 17:44:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.924 17:44:30 -- accel/accel.sh@21 -- # val=32 00:07:25.924 17:44:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.924 17:44:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.924 17:44:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.924 17:44:30 -- accel/accel.sh@21 -- # val=32 00:07:25.924 17:44:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.924 17:44:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.924 17:44:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.924 17:44:30 -- accel/accel.sh@21 -- # val=1 00:07:25.924 17:44:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.924 17:44:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.924 17:44:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.924 17:44:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:25.924 17:44:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.924 17:44:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.924 17:44:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.924 17:44:30 -- accel/accel.sh@21 -- # val=Yes 00:07:25.924 17:44:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.924 17:44:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.924 17:44:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.924 17:44:30 -- accel/accel.sh@21 -- # val= 00:07:25.924 17:44:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.924 17:44:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.924 17:44:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.924 17:44:30 -- accel/accel.sh@21 -- # val= 00:07:25.924 17:44:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.924 17:44:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.924 17:44:30 -- accel/accel.sh@20 -- # read -r var val 00:07:27.308 17:44:31 -- accel/accel.sh@21 -- # val= 00:07:27.308 17:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.308 17:44:31 -- accel/accel.sh@20 -- # IFS=: 00:07:27.308 17:44:31 -- accel/accel.sh@20 -- # read -r var val 00:07:27.308 17:44:31 -- accel/accel.sh@21 -- # val= 00:07:27.308 17:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.308 17:44:31 -- accel/accel.sh@20 -- # IFS=: 00:07:27.308 17:44:31 -- accel/accel.sh@20 -- # read -r var val 00:07:27.308 17:44:31 -- accel/accel.sh@21 -- # val= 00:07:27.308 17:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.308 17:44:31 -- 
accel/accel.sh@20 -- # IFS=: 00:07:27.308 17:44:31 -- accel/accel.sh@20 -- # read -r var val 00:07:27.308 17:44:31 -- accel/accel.sh@21 -- # val= 00:07:27.308 17:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.308 17:44:31 -- accel/accel.sh@20 -- # IFS=: 00:07:27.308 17:44:31 -- accel/accel.sh@20 -- # read -r var val 00:07:27.308 17:44:31 -- accel/accel.sh@21 -- # val= 00:07:27.308 17:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.308 17:44:31 -- accel/accel.sh@20 -- # IFS=: 00:07:27.308 17:44:31 -- accel/accel.sh@20 -- # read -r var val 00:07:27.308 17:44:31 -- accel/accel.sh@21 -- # val= 00:07:27.308 17:44:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.308 17:44:31 -- accel/accel.sh@20 -- # IFS=: 00:07:27.308 17:44:31 -- accel/accel.sh@20 -- # read -r var val 00:07:27.308 17:44:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:27.308 17:44:31 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:27.308 17:44:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.308 00:07:27.308 real 0m2.627s 00:07:27.308 user 0m2.399s 00:07:27.308 sys 0m0.235s 00:07:27.308 17:44:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.308 17:44:31 -- common/autotest_common.sh@10 -- # set +x 00:07:27.308 ************************************ 00:07:27.308 END TEST accel_decmop_full 00:07:27.308 ************************************ 00:07:27.308 17:44:31 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:27.308 17:44:31 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:27.308 17:44:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:27.308 17:44:31 -- common/autotest_common.sh@10 -- # set +x 00:07:27.308 ************************************ 00:07:27.308 START TEST accel_decomp_mcore 00:07:27.308 ************************************ 00:07:27.308 17:44:31 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:27.308 17:44:31 -- accel/accel.sh@16 -- # local accel_opc 00:07:27.308 17:44:31 -- accel/accel.sh@17 -- # local accel_module 00:07:27.308 17:44:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:27.308 17:44:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:27.308 17:44:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.308 17:44:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.308 17:44:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.308 17:44:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.309 17:44:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.309 17:44:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.309 17:44:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.309 17:44:31 -- accel/accel.sh@42 -- # jq -r . 00:07:27.309 [2024-07-22 17:44:31.260614] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
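The accel_decmop_full case above differs from the plain decompress case only by '-o 0', after which the summary reports a 111250-byte transfer size, which appears to correspond to the full test input handled as one transfer instead of 4096-byte chunks (far fewer, much larger transfers). The same bandwidth cross-check applies:

  # 4416 transfers/s x 111250 B per transfer, in MiB/s
  echo $((4416 * 111250 / 1024 / 1024))   # prints 468, matching 'Total 4416/s 468 MiB/s'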
00:07:27.309 [2024-07-22 17:44:31.260722] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501761 ] 00:07:27.309 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.309 [2024-07-22 17:44:31.343492] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:27.309 [2024-07-22 17:44:31.409897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.309 [2024-07-22 17:44:31.410025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.309 [2024-07-22 17:44:31.410191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.309 [2024-07-22 17:44:31.410194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.692 17:44:32 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:28.692 00:07:28.692 SPDK Configuration: 00:07:28.692 Core mask: 0xf 00:07:28.692 00:07:28.692 Accel Perf Configuration: 00:07:28.692 Workload Type: decompress 00:07:28.692 Transfer size: 4096 bytes 00:07:28.692 Vector count 1 00:07:28.692 Module: software 00:07:28.692 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:28.692 Queue depth: 32 00:07:28.692 Allocate depth: 32 00:07:28.692 # threads/core: 1 00:07:28.692 Run time: 1 seconds 00:07:28.692 Verify: Yes 00:07:28.692 00:07:28.692 Running for 1 seconds... 00:07:28.692 00:07:28.692 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:28.692 ------------------------------------------------------------------------------------ 00:07:28.692 0,0 63264/s 116 MiB/s 0 0 00:07:28.692 3,0 63616/s 117 MiB/s 0 0 00:07:28.692 2,0 82784/s 152 MiB/s 0 0 00:07:28.692 1,0 63584/s 117 MiB/s 0 0 00:07:28.692 ==================================================================================== 00:07:28.692 Total 273248/s 1067 MiB/s 0 0' 00:07:28.692 17:44:32 -- accel/accel.sh@20 -- # IFS=: 00:07:28.692 17:44:32 -- accel/accel.sh@20 -- # read -r var val 00:07:28.692 17:44:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:28.692 17:44:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:28.692 17:44:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.693 17:44:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.693 17:44:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.693 17:44:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.693 17:44:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.693 17:44:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.693 17:44:32 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.693 17:44:32 -- accel/accel.sh@42 -- # jq -r . 00:07:28.693 [2024-07-22 17:44:32.564718] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
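With '-m 0xf' the same decompress workload is spread across four reactors (cores 0-3), so the table above gains one row per core; the Total row is simply the per-core sum:

  echo $((63264 + 63616 + 82784 + 63584))   # prints 273248, the Total transfers/s
  echo $((273248 * 4096 / 1024 / 1024))     # prints 1067, the Total MiB/s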
00:07:28.693 [2024-07-22 17:44:32.564795] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501830 ] 00:07:28.693 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.693 [2024-07-22 17:44:32.648476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:28.693 [2024-07-22 17:44:32.713144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.693 [2024-07-22 17:44:32.713273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.693 [2024-07-22 17:44:32.713393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.693 [2024-07-22 17:44:32.713397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.693 17:44:32 -- accel/accel.sh@21 -- # val= 00:07:28.693 17:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # IFS=: 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # read -r var val 00:07:28.693 17:44:32 -- accel/accel.sh@21 -- # val= 00:07:28.693 17:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # IFS=: 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # read -r var val 00:07:28.693 17:44:32 -- accel/accel.sh@21 -- # val= 00:07:28.693 17:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # IFS=: 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # read -r var val 00:07:28.693 17:44:32 -- accel/accel.sh@21 -- # val=0xf 00:07:28.693 17:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # IFS=: 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # read -r var val 00:07:28.693 17:44:32 -- accel/accel.sh@21 -- # val= 00:07:28.693 17:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # IFS=: 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # read -r var val 00:07:28.693 17:44:32 -- accel/accel.sh@21 -- # val= 00:07:28.693 17:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # IFS=: 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # read -r var val 00:07:28.693 17:44:32 -- accel/accel.sh@21 -- # val=decompress 00:07:28.693 17:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.693 17:44:32 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # IFS=: 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # read -r var val 00:07:28.693 17:44:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:28.693 17:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # IFS=: 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # read -r var val 00:07:28.693 17:44:32 -- accel/accel.sh@21 -- # val= 00:07:28.693 17:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # IFS=: 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # read -r var val 00:07:28.693 17:44:32 -- accel/accel.sh@21 -- # val=software 00:07:28.693 17:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.693 17:44:32 -- accel/accel.sh@23 -- # accel_module=software 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # IFS=: 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # read -r var val 00:07:28.693 17:44:32 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:28.693 17:44:32 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # IFS=: 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # read -r var val 00:07:28.693 17:44:32 -- accel/accel.sh@21 -- # val=32 00:07:28.693 17:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # IFS=: 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # read -r var val 00:07:28.693 17:44:32 -- accel/accel.sh@21 -- # val=32 00:07:28.693 17:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # IFS=: 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # read -r var val 00:07:28.693 17:44:32 -- accel/accel.sh@21 -- # val=1 00:07:28.693 17:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # IFS=: 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # read -r var val 00:07:28.693 17:44:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:28.693 17:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # IFS=: 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # read -r var val 00:07:28.693 17:44:32 -- accel/accel.sh@21 -- # val=Yes 00:07:28.693 17:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # IFS=: 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # read -r var val 00:07:28.693 17:44:32 -- accel/accel.sh@21 -- # val= 00:07:28.693 17:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # IFS=: 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # read -r var val 00:07:28.693 17:44:32 -- accel/accel.sh@21 -- # val= 00:07:28.693 17:44:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # IFS=: 00:07:28.693 17:44:32 -- accel/accel.sh@20 -- # read -r var val 00:07:29.633 17:44:33 -- accel/accel.sh@21 -- # val= 00:07:29.633 17:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.633 17:44:33 -- accel/accel.sh@20 -- # IFS=: 00:07:29.633 17:44:33 -- accel/accel.sh@20 -- # read -r var val 00:07:29.633 17:44:33 -- accel/accel.sh@21 -- # val= 00:07:29.633 17:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.633 17:44:33 -- accel/accel.sh@20 -- # IFS=: 00:07:29.633 17:44:33 -- accel/accel.sh@20 -- # read -r var val 00:07:29.633 17:44:33 -- accel/accel.sh@21 -- # val= 00:07:29.633 17:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.633 17:44:33 -- accel/accel.sh@20 -- # IFS=: 00:07:29.633 17:44:33 -- accel/accel.sh@20 -- # read -r var val 00:07:29.633 17:44:33 -- accel/accel.sh@21 -- # val= 00:07:29.633 17:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.633 17:44:33 -- accel/accel.sh@20 -- # IFS=: 00:07:29.633 17:44:33 -- accel/accel.sh@20 -- # read -r var val 00:07:29.633 17:44:33 -- accel/accel.sh@21 -- # val= 00:07:29.633 17:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.633 17:44:33 -- accel/accel.sh@20 -- # IFS=: 00:07:29.633 17:44:33 -- accel/accel.sh@20 -- # read -r var val 00:07:29.633 17:44:33 -- accel/accel.sh@21 -- # val= 00:07:29.633 17:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.633 17:44:33 -- accel/accel.sh@20 -- # IFS=: 00:07:29.633 17:44:33 -- accel/accel.sh@20 -- # read -r var val 00:07:29.633 17:44:33 -- accel/accel.sh@21 -- # val= 00:07:29.633 17:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.633 17:44:33 -- accel/accel.sh@20 -- # IFS=: 00:07:29.633 17:44:33 -- accel/accel.sh@20 -- # read -r var val 00:07:29.633 17:44:33 -- accel/accel.sh@21 -- # val= 00:07:29.633 17:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.633 
17:44:33 -- accel/accel.sh@20 -- # IFS=: 00:07:29.633 17:44:33 -- accel/accel.sh@20 -- # read -r var val 00:07:29.633 17:44:33 -- accel/accel.sh@21 -- # val= 00:07:29.633 17:44:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.633 17:44:33 -- accel/accel.sh@20 -- # IFS=: 00:07:29.633 17:44:33 -- accel/accel.sh@20 -- # read -r var val 00:07:29.633 17:44:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:29.633 17:44:33 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:29.633 17:44:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.633 00:07:29.633 real 0m2.615s 00:07:29.633 user 0m8.840s 00:07:29.633 sys 0m0.237s 00:07:29.633 17:44:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.633 17:44:33 -- common/autotest_common.sh@10 -- # set +x 00:07:29.633 ************************************ 00:07:29.633 END TEST accel_decomp_mcore 00:07:29.633 ************************************ 00:07:29.633 17:44:33 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:29.633 17:44:33 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:29.633 17:44:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:29.633 17:44:33 -- common/autotest_common.sh@10 -- # set +x 00:07:29.633 ************************************ 00:07:29.633 START TEST accel_decomp_full_mcore 00:07:29.633 ************************************ 00:07:29.633 17:44:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:29.633 17:44:33 -- accel/accel.sh@16 -- # local accel_opc 00:07:29.633 17:44:33 -- accel/accel.sh@17 -- # local accel_module 00:07:29.633 17:44:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:29.633 17:44:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:29.633 17:44:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.633 17:44:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.633 17:44:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.633 17:44:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.633 17:44:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.633 17:44:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.633 17:44:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.633 17:44:33 -- accel/accel.sh@42 -- # jq -r . 00:07:29.938 [2024-07-22 17:44:33.922712] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
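Each case is wrapped by run_test from common/autotest_common.sh (the autotest_common.sh frames in the trace), which prints the START/END banners and the time summary. For the mcore case just closed, user time (0m8.840s) running well above real time (0m2.615s) is consistent with four busy-polling reactors working in parallel. A rough stand-in for that wrapper, given a hypothetical name so it is not mistaken for the real helper:

  run_case() {                     # hypothetical sketch, not the real run_test
      local name=$1; shift
      echo "START TEST $name"
      time "$@"
      echo "END TEST $name"
  }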
00:07:29.939 [2024-07-22 17:44:33.922814] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1502109 ] 00:07:29.939 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.939 [2024-07-22 17:44:34.011445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:29.939 [2024-07-22 17:44:34.088585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.939 [2024-07-22 17:44:34.088719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.939 [2024-07-22 17:44:34.088838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:29.939 [2024-07-22 17:44:34.088842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.346 17:44:35 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:31.346 00:07:31.346 SPDK Configuration: 00:07:31.346 Core mask: 0xf 00:07:31.346 00:07:31.346 Accel Perf Configuration: 00:07:31.346 Workload Type: decompress 00:07:31.346 Transfer size: 111250 bytes 00:07:31.346 Vector count 1 00:07:31.346 Module: software 00:07:31.346 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:31.346 Queue depth: 32 00:07:31.346 Allocate depth: 32 00:07:31.346 # threads/core: 1 00:07:31.346 Run time: 1 seconds 00:07:31.346 Verify: Yes 00:07:31.346 00:07:31.346 Running for 1 seconds... 00:07:31.346 00:07:31.346 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:31.346 ------------------------------------------------------------------------------------ 00:07:31.346 0,0 4416/s 182 MiB/s 0 0 00:07:31.346 3,0 4416/s 182 MiB/s 0 0 00:07:31.346 2,0 5792/s 239 MiB/s 0 0 00:07:31.346 1,0 4416/s 182 MiB/s 0 0 00:07:31.346 ==================================================================================== 00:07:31.346 Total 19040/s 2020 MiB/s 0 0' 00:07:31.346 17:44:35 -- accel/accel.sh@20 -- # IFS=: 00:07:31.346 17:44:35 -- accel/accel.sh@20 -- # read -r var val 00:07:31.346 17:44:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:31.346 17:44:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:31.346 17:44:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.346 17:44:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.346 17:44:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.346 17:44:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.346 17:44:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.347 17:44:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.347 17:44:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.347 17:44:35 -- accel/accel.sh@42 -- # jq -r . 00:07:31.347 [2024-07-22 17:44:35.259054] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
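The full-buffer multicore run above follows the same pattern: 111250-byte transfers on all four cores, with core 2 again ahead of the others, and the Total row equal to the per-core sum:

  echo $((4416 + 4416 + 5792 + 4416))        # prints 19040, the Total transfers/s
  echo $((19040 * 111250 / 1024 / 1024))     # prints 2020, the Total MiB/s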
00:07:31.347 [2024-07-22 17:44:35.259157] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1502417 ] 00:07:31.347 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.347 [2024-07-22 17:44:35.341844] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:31.347 [2024-07-22 17:44:35.403441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.347 [2024-07-22 17:44:35.403572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.347 [2024-07-22 17:44:35.403689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.347 [2024-07-22 17:44:35.403692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.347 17:44:35 -- accel/accel.sh@21 -- # val= 00:07:31.347 17:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # IFS=: 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # read -r var val 00:07:31.347 17:44:35 -- accel/accel.sh@21 -- # val= 00:07:31.347 17:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # IFS=: 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # read -r var val 00:07:31.347 17:44:35 -- accel/accel.sh@21 -- # val= 00:07:31.347 17:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # IFS=: 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # read -r var val 00:07:31.347 17:44:35 -- accel/accel.sh@21 -- # val=0xf 00:07:31.347 17:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # IFS=: 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # read -r var val 00:07:31.347 17:44:35 -- accel/accel.sh@21 -- # val= 00:07:31.347 17:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # IFS=: 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # read -r var val 00:07:31.347 17:44:35 -- accel/accel.sh@21 -- # val= 00:07:31.347 17:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # IFS=: 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # read -r var val 00:07:31.347 17:44:35 -- accel/accel.sh@21 -- # val=decompress 00:07:31.347 17:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.347 17:44:35 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # IFS=: 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # read -r var val 00:07:31.347 17:44:35 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:31.347 17:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # IFS=: 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # read -r var val 00:07:31.347 17:44:35 -- accel/accel.sh@21 -- # val= 00:07:31.347 17:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # IFS=: 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # read -r var val 00:07:31.347 17:44:35 -- accel/accel.sh@21 -- # val=software 00:07:31.347 17:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.347 17:44:35 -- accel/accel.sh@23 -- # accel_module=software 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # IFS=: 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # read -r var val 00:07:31.347 17:44:35 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:31.347 17:44:35 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # IFS=: 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # read -r var val 00:07:31.347 17:44:35 -- accel/accel.sh@21 -- # val=32 00:07:31.347 17:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # IFS=: 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # read -r var val 00:07:31.347 17:44:35 -- accel/accel.sh@21 -- # val=32 00:07:31.347 17:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # IFS=: 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # read -r var val 00:07:31.347 17:44:35 -- accel/accel.sh@21 -- # val=1 00:07:31.347 17:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # IFS=: 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # read -r var val 00:07:31.347 17:44:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:31.347 17:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # IFS=: 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # read -r var val 00:07:31.347 17:44:35 -- accel/accel.sh@21 -- # val=Yes 00:07:31.347 17:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # IFS=: 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # read -r var val 00:07:31.347 17:44:35 -- accel/accel.sh@21 -- # val= 00:07:31.347 17:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # IFS=: 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # read -r var val 00:07:31.347 17:44:35 -- accel/accel.sh@21 -- # val= 00:07:31.347 17:44:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # IFS=: 00:07:31.347 17:44:35 -- accel/accel.sh@20 -- # read -r var val 00:07:32.288 17:44:36 -- accel/accel.sh@21 -- # val= 00:07:32.288 17:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.288 17:44:36 -- accel/accel.sh@20 -- # IFS=: 00:07:32.288 17:44:36 -- accel/accel.sh@20 -- # read -r var val 00:07:32.288 17:44:36 -- accel/accel.sh@21 -- # val= 00:07:32.288 17:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.288 17:44:36 -- accel/accel.sh@20 -- # IFS=: 00:07:32.288 17:44:36 -- accel/accel.sh@20 -- # read -r var val 00:07:32.288 17:44:36 -- accel/accel.sh@21 -- # val= 00:07:32.288 17:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.288 17:44:36 -- accel/accel.sh@20 -- # IFS=: 00:07:32.288 17:44:36 -- accel/accel.sh@20 -- # read -r var val 00:07:32.288 17:44:36 -- accel/accel.sh@21 -- # val= 00:07:32.288 17:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.288 17:44:36 -- accel/accel.sh@20 -- # IFS=: 00:07:32.288 17:44:36 -- accel/accel.sh@20 -- # read -r var val 00:07:32.288 17:44:36 -- accel/accel.sh@21 -- # val= 00:07:32.288 17:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.288 17:44:36 -- accel/accel.sh@20 -- # IFS=: 00:07:32.288 17:44:36 -- accel/accel.sh@20 -- # read -r var val 00:07:32.288 17:44:36 -- accel/accel.sh@21 -- # val= 00:07:32.288 17:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.288 17:44:36 -- accel/accel.sh@20 -- # IFS=: 00:07:32.288 17:44:36 -- accel/accel.sh@20 -- # read -r var val 00:07:32.288 17:44:36 -- accel/accel.sh@21 -- # val= 00:07:32.288 17:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.288 17:44:36 -- accel/accel.sh@20 -- # IFS=: 00:07:32.288 17:44:36 -- accel/accel.sh@20 -- # read -r var val 00:07:32.288 17:44:36 -- accel/accel.sh@21 -- # val= 00:07:32.288 17:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.288 
17:44:36 -- accel/accel.sh@20 -- # IFS=: 00:07:32.288 17:44:36 -- accel/accel.sh@20 -- # read -r var val 00:07:32.288 17:44:36 -- accel/accel.sh@21 -- # val= 00:07:32.288 17:44:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.288 17:44:36 -- accel/accel.sh@20 -- # IFS=: 00:07:32.288 17:44:36 -- accel/accel.sh@20 -- # read -r var val 00:07:32.288 17:44:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:32.288 17:44:36 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:32.288 17:44:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.288 00:07:32.288 real 0m2.660s 00:07:32.288 user 0m8.945s 00:07:32.288 sys 0m0.255s 00:07:32.288 17:44:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.288 17:44:36 -- common/autotest_common.sh@10 -- # set +x 00:07:32.288 ************************************ 00:07:32.288 END TEST accel_decomp_full_mcore 00:07:32.288 ************************************ 00:07:32.549 17:44:36 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:32.549 17:44:36 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:32.549 17:44:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:32.549 17:44:36 -- common/autotest_common.sh@10 -- # set +x 00:07:32.549 ************************************ 00:07:32.549 START TEST accel_decomp_mthread 00:07:32.549 ************************************ 00:07:32.549 17:44:36 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:32.549 17:44:36 -- accel/accel.sh@16 -- # local accel_opc 00:07:32.549 17:44:36 -- accel/accel.sh@17 -- # local accel_module 00:07:32.549 17:44:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:32.549 17:44:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:32.549 17:44:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.549 17:44:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.549 17:44:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.549 17:44:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.549 17:44:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.549 17:44:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.549 17:44:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.549 17:44:36 -- accel/accel.sh@42 -- # jq -r . 00:07:32.549 [2024-07-22 17:44:36.622123] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:32.549 [2024-07-22 17:44:36.622199] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1502727 ] 00:07:32.549 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.549 [2024-07-22 17:44:36.705823] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.549 [2024-07-22 17:44:36.780158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.932 17:44:37 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:33.932 00:07:33.932 SPDK Configuration: 00:07:33.932 Core mask: 0x1 00:07:33.932 00:07:33.932 Accel Perf Configuration: 00:07:33.932 Workload Type: decompress 00:07:33.932 Transfer size: 4096 bytes 00:07:33.932 Vector count 1 00:07:33.932 Module: software 00:07:33.932 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:33.932 Queue depth: 32 00:07:33.932 Allocate depth: 32 00:07:33.932 # threads/core: 2 00:07:33.932 Run time: 1 seconds 00:07:33.932 Verify: Yes 00:07:33.932 00:07:33.932 Running for 1 seconds... 00:07:33.932 00:07:33.932 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:33.932 ------------------------------------------------------------------------------------ 00:07:33.932 0,1 34464/s 63 MiB/s 0 0 00:07:33.932 0,0 34368/s 63 MiB/s 0 0 00:07:33.932 ==================================================================================== 00:07:33.932 Total 68832/s 268 MiB/s 0 0' 00:07:33.932 17:44:37 -- accel/accel.sh@20 -- # IFS=: 00:07:33.932 17:44:37 -- accel/accel.sh@20 -- # read -r var val 00:07:33.932 17:44:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:33.932 17:44:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:33.932 17:44:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.932 17:44:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.932 17:44:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.932 17:44:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.932 17:44:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.932 17:44:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.932 17:44:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.932 17:44:37 -- accel/accel.sh@42 -- # jq -r . 00:07:33.932 [2024-07-22 17:44:37.932459] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
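Here '-T 2' keeps the single-core mask (0x1) but runs two worker threads on core 0, so the table above is keyed by core,thread (rows 0,0 and 0,1); since both threads share the core, aggregate throughput stays close to the single-thread result (268 MiB/s versus 267 MiB/s earlier):

  echo $((34464 + 34368))                 # prints 68832, the Total transfers/s
  echo $((68832 * 4096 / 1024 / 1024))    # prints 268, the Total MiB/s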
00:07:33.932 [2024-07-22 17:44:37.932532] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1502783 ] 00:07:33.932 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.932 [2024-07-22 17:44:38.014861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.932 [2024-07-22 17:44:38.073946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.932 17:44:38 -- accel/accel.sh@21 -- # val= 00:07:33.932 17:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.932 17:44:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.932 17:44:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.932 17:44:38 -- accel/accel.sh@21 -- # val= 00:07:33.932 17:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.932 17:44:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.932 17:44:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.932 17:44:38 -- accel/accel.sh@21 -- # val= 00:07:33.932 17:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.932 17:44:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.932 17:44:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.932 17:44:38 -- accel/accel.sh@21 -- # val=0x1 00:07:33.932 17:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.932 17:44:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.932 17:44:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.932 17:44:38 -- accel/accel.sh@21 -- # val= 00:07:33.932 17:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.932 17:44:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.932 17:44:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.932 17:44:38 -- accel/accel.sh@21 -- # val= 00:07:33.932 17:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.932 17:44:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.932 17:44:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.932 17:44:38 -- accel/accel.sh@21 -- # val=decompress 00:07:33.932 17:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.932 17:44:38 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:33.932 17:44:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.932 17:44:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.932 17:44:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:33.932 17:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.932 17:44:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.932 17:44:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.932 17:44:38 -- accel/accel.sh@21 -- # val= 00:07:33.932 17:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.932 17:44:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.932 17:44:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.932 17:44:38 -- accel/accel.sh@21 -- # val=software 00:07:33.932 17:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.932 17:44:38 -- accel/accel.sh@23 -- # accel_module=software 00:07:33.932 17:44:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.932 17:44:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.932 17:44:38 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:33.932 17:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.932 17:44:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.932 17:44:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.932 17:44:38 -- accel/accel.sh@21 -- # val=32 00:07:33.932 17:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.932 17:44:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.932 17:44:38 
-- accel/accel.sh@20 -- # read -r var val 00:07:33.932 17:44:38 -- accel/accel.sh@21 -- # val=32 00:07:33.932 17:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.932 17:44:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.932 17:44:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.932 17:44:38 -- accel/accel.sh@21 -- # val=2 00:07:33.932 17:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.933 17:44:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.933 17:44:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.933 17:44:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:33.933 17:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.933 17:44:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.933 17:44:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.933 17:44:38 -- accel/accel.sh@21 -- # val=Yes 00:07:33.933 17:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.933 17:44:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.933 17:44:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.933 17:44:38 -- accel/accel.sh@21 -- # val= 00:07:33.933 17:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.933 17:44:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.933 17:44:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.933 17:44:38 -- accel/accel.sh@21 -- # val= 00:07:33.933 17:44:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.933 17:44:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.933 17:44:38 -- accel/accel.sh@20 -- # read -r var val 00:07:35.315 17:44:39 -- accel/accel.sh@21 -- # val= 00:07:35.315 17:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.315 17:44:39 -- accel/accel.sh@20 -- # IFS=: 00:07:35.315 17:44:39 -- accel/accel.sh@20 -- # read -r var val 00:07:35.315 17:44:39 -- accel/accel.sh@21 -- # val= 00:07:35.315 17:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.315 17:44:39 -- accel/accel.sh@20 -- # IFS=: 00:07:35.315 17:44:39 -- accel/accel.sh@20 -- # read -r var val 00:07:35.315 17:44:39 -- accel/accel.sh@21 -- # val= 00:07:35.315 17:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.315 17:44:39 -- accel/accel.sh@20 -- # IFS=: 00:07:35.315 17:44:39 -- accel/accel.sh@20 -- # read -r var val 00:07:35.315 17:44:39 -- accel/accel.sh@21 -- # val= 00:07:35.315 17:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.315 17:44:39 -- accel/accel.sh@20 -- # IFS=: 00:07:35.315 17:44:39 -- accel/accel.sh@20 -- # read -r var val 00:07:35.315 17:44:39 -- accel/accel.sh@21 -- # val= 00:07:35.315 17:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.315 17:44:39 -- accel/accel.sh@20 -- # IFS=: 00:07:35.315 17:44:39 -- accel/accel.sh@20 -- # read -r var val 00:07:35.315 17:44:39 -- accel/accel.sh@21 -- # val= 00:07:35.315 17:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.315 17:44:39 -- accel/accel.sh@20 -- # IFS=: 00:07:35.315 17:44:39 -- accel/accel.sh@20 -- # read -r var val 00:07:35.315 17:44:39 -- accel/accel.sh@21 -- # val= 00:07:35.315 17:44:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.315 17:44:39 -- accel/accel.sh@20 -- # IFS=: 00:07:35.315 17:44:39 -- accel/accel.sh@20 -- # read -r var val 00:07:35.315 17:44:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:35.315 17:44:39 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:35.315 17:44:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.315 00:07:35.315 real 0m2.610s 00:07:35.315 user 0m2.382s 00:07:35.315 sys 0m0.234s 00:07:35.315 17:44:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.315 17:44:39 -- common/autotest_common.sh@10 -- # set +x 
00:07:35.315 ************************************ 00:07:35.315 END TEST accel_decomp_mthread 00:07:35.315 ************************************ 00:07:35.315 17:44:39 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:35.315 17:44:39 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:35.315 17:44:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:35.315 17:44:39 -- common/autotest_common.sh@10 -- # set +x 00:07:35.315 ************************************ 00:07:35.315 START TEST accel_deomp_full_mthread 00:07:35.315 ************************************ 00:07:35.315 17:44:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:35.315 17:44:39 -- accel/accel.sh@16 -- # local accel_opc 00:07:35.315 17:44:39 -- accel/accel.sh@17 -- # local accel_module 00:07:35.315 17:44:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:35.315 17:44:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:35.315 17:44:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.315 17:44:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.315 17:44:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.315 17:44:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.315 17:44:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.315 17:44:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.315 17:44:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.315 17:44:39 -- accel/accel.sh@42 -- # jq -r . 00:07:35.315 [2024-07-22 17:44:39.275459] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:35.315 [2024-07-22 17:44:39.275532] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1503085 ] 00:07:35.315 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.315 [2024-07-22 17:44:39.360047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.315 [2024-07-22 17:44:39.421958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.708 17:44:40 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:36.708 00:07:36.708 SPDK Configuration: 00:07:36.708 Core mask: 0x1 00:07:36.708 00:07:36.708 Accel Perf Configuration: 00:07:36.708 Workload Type: decompress 00:07:36.708 Transfer size: 111250 bytes 00:07:36.708 Vector count 1 00:07:36.708 Module: software 00:07:36.708 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:36.708 Queue depth: 32 00:07:36.708 Allocate depth: 32 00:07:36.708 # threads/core: 2 00:07:36.708 Run time: 1 seconds 00:07:36.708 Verify: Yes 00:07:36.708 00:07:36.708 Running for 1 seconds... 
00:07:36.709 00:07:36.709 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:36.709 ------------------------------------------------------------------------------------ 00:07:36.709 0,1 2240/s 92 MiB/s 0 0 00:07:36.709 0,0 2240/s 92 MiB/s 0 0 00:07:36.709 ==================================================================================== 00:07:36.709 Total 4480/s 475 MiB/s 0 0' 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # IFS=: 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # read -r var val 00:07:36.709 17:44:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:36.709 17:44:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:36.709 17:44:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.709 17:44:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.709 17:44:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.709 17:44:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.709 17:44:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.709 17:44:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.709 17:44:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.709 17:44:40 -- accel/accel.sh@42 -- # jq -r . 00:07:36.709 [2024-07-22 17:44:40.598964] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:36.709 [2024-07-22 17:44:40.599042] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1503387 ] 00:07:36.709 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.709 [2024-07-22 17:44:40.682835] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.709 [2024-07-22 17:44:40.743508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.709 17:44:40 -- accel/accel.sh@21 -- # val= 00:07:36.709 17:44:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # IFS=: 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # read -r var val 00:07:36.709 17:44:40 -- accel/accel.sh@21 -- # val= 00:07:36.709 17:44:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # IFS=: 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # read -r var val 00:07:36.709 17:44:40 -- accel/accel.sh@21 -- # val= 00:07:36.709 17:44:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # IFS=: 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # read -r var val 00:07:36.709 17:44:40 -- accel/accel.sh@21 -- # val=0x1 00:07:36.709 17:44:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # IFS=: 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # read -r var val 00:07:36.709 17:44:40 -- accel/accel.sh@21 -- # val= 00:07:36.709 17:44:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # IFS=: 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # read -r var val 00:07:36.709 17:44:40 -- accel/accel.sh@21 -- # val= 00:07:36.709 17:44:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # IFS=: 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # read -r var val 00:07:36.709 17:44:40 -- accel/accel.sh@21 -- # val=decompress 00:07:36.709 
17:44:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.709 17:44:40 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # IFS=: 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # read -r var val 00:07:36.709 17:44:40 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:36.709 17:44:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # IFS=: 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # read -r var val 00:07:36.709 17:44:40 -- accel/accel.sh@21 -- # val= 00:07:36.709 17:44:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # IFS=: 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # read -r var val 00:07:36.709 17:44:40 -- accel/accel.sh@21 -- # val=software 00:07:36.709 17:44:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.709 17:44:40 -- accel/accel.sh@23 -- # accel_module=software 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # IFS=: 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # read -r var val 00:07:36.709 17:44:40 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:36.709 17:44:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # IFS=: 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # read -r var val 00:07:36.709 17:44:40 -- accel/accel.sh@21 -- # val=32 00:07:36.709 17:44:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # IFS=: 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # read -r var val 00:07:36.709 17:44:40 -- accel/accel.sh@21 -- # val=32 00:07:36.709 17:44:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # IFS=: 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # read -r var val 00:07:36.709 17:44:40 -- accel/accel.sh@21 -- # val=2 00:07:36.709 17:44:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # IFS=: 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # read -r var val 00:07:36.709 17:44:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:36.709 17:44:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # IFS=: 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # read -r var val 00:07:36.709 17:44:40 -- accel/accel.sh@21 -- # val=Yes 00:07:36.709 17:44:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # IFS=: 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # read -r var val 00:07:36.709 17:44:40 -- accel/accel.sh@21 -- # val= 00:07:36.709 17:44:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # IFS=: 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # read -r var val 00:07:36.709 17:44:40 -- accel/accel.sh@21 -- # val= 00:07:36.709 17:44:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # IFS=: 00:07:36.709 17:44:40 -- accel/accel.sh@20 -- # read -r var val 00:07:37.649 17:44:41 -- accel/accel.sh@21 -- # val= 00:07:37.649 17:44:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.649 17:44:41 -- accel/accel.sh@20 -- # IFS=: 00:07:37.649 17:44:41 -- accel/accel.sh@20 -- # read -r var val 00:07:37.649 17:44:41 -- accel/accel.sh@21 -- # val= 00:07:37.649 17:44:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.649 17:44:41 -- accel/accel.sh@20 -- # IFS=: 00:07:37.649 17:44:41 -- accel/accel.sh@20 -- # read -r var val 00:07:37.649 17:44:41 -- accel/accel.sh@21 -- # val= 00:07:37.649 17:44:41 -- accel/accel.sh@22 -- # 
case "$var" in 00:07:37.649 17:44:41 -- accel/accel.sh@20 -- # IFS=: 00:07:37.649 17:44:41 -- accel/accel.sh@20 -- # read -r var val 00:07:37.649 17:44:41 -- accel/accel.sh@21 -- # val= 00:07:37.649 17:44:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.649 17:44:41 -- accel/accel.sh@20 -- # IFS=: 00:07:37.649 17:44:41 -- accel/accel.sh@20 -- # read -r var val 00:07:37.649 17:44:41 -- accel/accel.sh@21 -- # val= 00:07:37.649 17:44:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.649 17:44:41 -- accel/accel.sh@20 -- # IFS=: 00:07:37.649 17:44:41 -- accel/accel.sh@20 -- # read -r var val 00:07:37.649 17:44:41 -- accel/accel.sh@21 -- # val= 00:07:37.649 17:44:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.649 17:44:41 -- accel/accel.sh@20 -- # IFS=: 00:07:37.649 17:44:41 -- accel/accel.sh@20 -- # read -r var val 00:07:37.649 17:44:41 -- accel/accel.sh@21 -- # val= 00:07:37.649 17:44:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.649 17:44:41 -- accel/accel.sh@20 -- # IFS=: 00:07:37.649 17:44:41 -- accel/accel.sh@20 -- # read -r var val 00:07:37.649 17:44:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:37.649 17:44:41 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:37.649 17:44:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.649 00:07:37.649 real 0m2.651s 00:07:37.649 user 0m2.432s 00:07:37.649 sys 0m0.227s 00:07:37.649 17:44:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.649 17:44:41 -- common/autotest_common.sh@10 -- # set +x 00:07:37.649 ************************************ 00:07:37.649 END TEST accel_deomp_full_mthread 00:07:37.649 ************************************ 00:07:37.909 17:44:41 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:37.909 17:44:41 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:37.909 17:44:41 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:37.909 17:44:41 -- accel/accel.sh@129 -- # build_accel_config 00:07:37.909 17:44:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.909 17:44:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.909 17:44:41 -- common/autotest_common.sh@10 -- # set +x 00:07:37.909 17:44:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.909 17:44:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.909 17:44:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.909 17:44:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.909 17:44:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.909 17:44:41 -- accel/accel.sh@42 -- # jq -r . 00:07:37.909 ************************************ 00:07:37.909 START TEST accel_dif_functional_tests 00:07:37.909 ************************************ 00:07:37.909 17:44:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:37.909 [2024-07-22 17:44:41.988226] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:37.909 [2024-07-22 17:44:41.988288] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1503656 ] 00:07:37.909 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.909 [2024-07-22 17:44:42.072690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:37.909 [2024-07-22 17:44:42.144311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.909 [2024-07-22 17:44:42.144455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.909 [2024-07-22 17:44:42.144594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.169 00:07:38.169 00:07:38.169 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.169 http://cunit.sourceforge.net/ 00:07:38.169 00:07:38.169 00:07:38.169 Suite: accel_dif 00:07:38.169 Test: verify: DIF generated, GUARD check ...passed 00:07:38.169 Test: verify: DIF generated, APPTAG check ...passed 00:07:38.169 Test: verify: DIF generated, REFTAG check ...passed 00:07:38.169 Test: verify: DIF not generated, GUARD check ...[2024-07-22 17:44:42.199352] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:38.169 [2024-07-22 17:44:42.199536] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:38.169 passed 00:07:38.169 Test: verify: DIF not generated, APPTAG check ...[2024-07-22 17:44:42.199583] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:38.169 [2024-07-22 17:44:42.199597] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:38.169 passed 00:07:38.169 Test: verify: DIF not generated, REFTAG check ...[2024-07-22 17:44:42.199614] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:38.169 [2024-07-22 17:44:42.199627] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:38.169 passed 00:07:38.169 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:38.169 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-22 17:44:42.199668] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:38.169 passed 00:07:38.169 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:38.169 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:38.169 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:38.169 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-22 17:44:42.199772] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:38.169 passed 00:07:38.169 Test: generate copy: DIF generated, GUARD check ...passed 00:07:38.169 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:38.169 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:38.169 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:38.169 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:38.169 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:38.169 Test: generate copy: iovecs-len validate ...[2024-07-22 17:44:42.199947] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:38.169 passed 00:07:38.169 Test: generate copy: buffer alignment validate ...passed 00:07:38.169 00:07:38.169 Run Summary: Type Total Ran Passed Failed Inactive 00:07:38.169 suites 1 1 n/a 0 0 00:07:38.169 tests 20 20 20 0 0 00:07:38.169 asserts 204 204 204 0 n/a 00:07:38.169 00:07:38.169 Elapsed time = 0.002 seconds 00:07:38.169 00:07:38.169 real 0m0.367s 00:07:38.169 user 0m0.477s 00:07:38.169 sys 0m0.151s 00:07:38.169 17:44:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.169 17:44:42 -- common/autotest_common.sh@10 -- # set +x 00:07:38.169 ************************************ 00:07:38.169 END TEST accel_dif_functional_tests 00:07:38.169 ************************************ 00:07:38.169 00:07:38.169 real 0m55.655s 00:07:38.169 user 1m3.413s 00:07:38.169 sys 0m6.481s 00:07:38.169 17:44:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.169 17:44:42 -- common/autotest_common.sh@10 -- # set +x 00:07:38.169 ************************************ 00:07:38.169 END TEST accel 00:07:38.169 ************************************ 00:07:38.169 17:44:42 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:38.169 17:44:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:38.169 17:44:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:38.169 17:44:42 -- common/autotest_common.sh@10 -- # set +x 00:07:38.169 ************************************ 00:07:38.169 START TEST accel_rpc 00:07:38.169 ************************************ 00:07:38.169 17:44:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:38.429 * Looking for test storage... 00:07:38.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:38.429 17:44:42 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:38.429 17:44:42 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1503774 00:07:38.429 17:44:42 -- accel/accel_rpc.sh@15 -- # waitforlisten 1503774 00:07:38.429 17:44:42 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:38.429 17:44:42 -- common/autotest_common.sh@819 -- # '[' -z 1503774 ']' 00:07:38.429 17:44:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.429 17:44:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:38.429 17:44:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.429 17:44:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:38.429 17:44:42 -- common/autotest_common.sh@10 -- # set +x 00:07:38.429 [2024-07-22 17:44:42.534133] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:38.429 [2024-07-22 17:44:42.534192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1503774 ] 00:07:38.429 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.429 [2024-07-22 17:44:42.615569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.429 [2024-07-22 17:44:42.677702] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:38.429 [2024-07-22 17:44:42.677827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.369 17:44:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:39.369 17:44:43 -- common/autotest_common.sh@852 -- # return 0 00:07:39.369 17:44:43 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:39.369 17:44:43 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:39.369 17:44:43 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:39.369 17:44:43 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:39.369 17:44:43 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:39.369 17:44:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:39.369 17:44:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:39.369 17:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:39.369 ************************************ 00:07:39.369 START TEST accel_assign_opcode 00:07:39.369 ************************************ 00:07:39.369 17:44:43 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:39.369 17:44:43 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:39.369 17:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:39.369 17:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:39.369 [2024-07-22 17:44:43.367774] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:39.369 17:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:39.369 17:44:43 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:39.369 17:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:39.369 17:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:39.369 [2024-07-22 17:44:43.375789] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:39.369 17:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:39.369 17:44:43 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:39.369 17:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:39.369 17:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:39.369 17:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:39.369 17:44:43 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:39.369 17:44:43 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:39.369 17:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:39.369 17:44:43 -- accel/accel_rpc.sh@42 -- # grep software 00:07:39.369 17:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:39.369 17:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:39.369 software 00:07:39.369 00:07:39.369 real 0m0.202s 00:07:39.369 user 0m0.048s 00:07:39.369 sys 0m0.011s 00:07:39.369 17:44:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.369 17:44:43 -- common/autotest_common.sh@10 -- # set +x 
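The accel_assign_opcode suite traced above is a plain JSON-RPC exchange against a target started with --wait-for-rpc: it first points the copy opcode at a bogus module name, re-assigns it to software, finishes startup with framework_start_init, and then checks the assignment over RPC. By hand, the happy path looks roughly like this (same workspace layout assumed):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/bin/spdk_tgt --wait-for-rpc &
  $SPDK/scripts/rpc.py accel_assign_opc -o copy -m software    # pin the copy opcode to the software module
  $SPDK/scripts/rpc.py framework_start_init                    # finish startup with the assignment in place
  $SPDK/scripts/rpc.py accel_get_opc_assignments | jq -r .copy # expect: software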
00:07:39.369 ************************************ 00:07:39.369 END TEST accel_assign_opcode 00:07:39.369 ************************************ 00:07:39.369 17:44:43 -- accel/accel_rpc.sh@55 -- # killprocess 1503774 00:07:39.369 17:44:43 -- common/autotest_common.sh@926 -- # '[' -z 1503774 ']' 00:07:39.369 17:44:43 -- common/autotest_common.sh@930 -- # kill -0 1503774 00:07:39.369 17:44:43 -- common/autotest_common.sh@931 -- # uname 00:07:39.369 17:44:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:39.370 17:44:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1503774 00:07:39.631 17:44:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:39.631 17:44:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:39.631 17:44:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1503774' 00:07:39.631 killing process with pid 1503774 00:07:39.631 17:44:43 -- common/autotest_common.sh@945 -- # kill 1503774 00:07:39.631 17:44:43 -- common/autotest_common.sh@950 -- # wait 1503774 00:07:39.631 00:07:39.631 real 0m1.455s 00:07:39.631 user 0m1.572s 00:07:39.631 sys 0m0.385s 00:07:39.631 17:44:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.631 17:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:39.631 ************************************ 00:07:39.631 END TEST accel_rpc 00:07:39.631 ************************************ 00:07:39.631 17:44:43 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:39.631 17:44:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:39.631 17:44:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:39.631 17:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:39.631 ************************************ 00:07:39.631 START TEST app_cmdline 00:07:39.631 ************************************ 00:07:39.631 17:44:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:39.892 * Looking for test storage... 00:07:39.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:39.892 17:44:43 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:39.892 17:44:43 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1504143 00:07:39.892 17:44:43 -- app/cmdline.sh@18 -- # waitforlisten 1504143 00:07:39.892 17:44:43 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:39.892 17:44:43 -- common/autotest_common.sh@819 -- # '[' -z 1504143 ']' 00:07:39.892 17:44:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.892 17:44:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:39.892 17:44:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.892 17:44:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:39.892 17:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:39.892 [2024-07-22 17:44:44.038863] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:39.892 [2024-07-22 17:44:44.038932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1504143 ] 00:07:39.892 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.892 [2024-07-22 17:44:44.126046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.152 [2024-07-22 17:44:44.192425] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:40.152 [2024-07-22 17:44:44.192552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.721 17:44:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:40.721 17:44:44 -- common/autotest_common.sh@852 -- # return 0 00:07:40.721 17:44:44 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:40.981 { 00:07:40.981 "version": "SPDK v24.01.1-pre git sha1 dbef7efac", 00:07:40.981 "fields": { 00:07:40.981 "major": 24, 00:07:40.981 "minor": 1, 00:07:40.981 "patch": 1, 00:07:40.981 "suffix": "-pre", 00:07:40.981 "commit": "dbef7efac" 00:07:40.981 } 00:07:40.981 } 00:07:40.981 17:44:45 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:40.981 17:44:45 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:40.981 17:44:45 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:40.981 17:44:45 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:40.981 17:44:45 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:40.981 17:44:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.981 17:44:45 -- common/autotest_common.sh@10 -- # set +x 00:07:40.981 17:44:45 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:40.981 17:44:45 -- app/cmdline.sh@26 -- # sort 00:07:40.981 17:44:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.981 17:44:45 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:40.981 17:44:45 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:40.981 17:44:45 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:40.981 17:44:45 -- common/autotest_common.sh@640 -- # local es=0 00:07:40.981 17:44:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:40.981 17:44:45 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:40.981 17:44:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:40.981 17:44:45 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:40.981 17:44:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:40.981 17:44:45 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:40.981 17:44:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:40.981 17:44:45 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:40.981 17:44:45 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:40.981 17:44:45 -- 
common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:41.242 request: 00:07:41.242 { 00:07:41.242 "method": "env_dpdk_get_mem_stats", 00:07:41.242 "req_id": 1 00:07:41.242 } 00:07:41.242 Got JSON-RPC error response 00:07:41.242 response: 00:07:41.242 { 00:07:41.242 "code": -32601, 00:07:41.242 "message": "Method not found" 00:07:41.242 } 00:07:41.242 17:44:45 -- common/autotest_common.sh@643 -- # es=1 00:07:41.242 17:44:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:41.242 17:44:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:41.242 17:44:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:41.242 17:44:45 -- app/cmdline.sh@1 -- # killprocess 1504143 00:07:41.242 17:44:45 -- common/autotest_common.sh@926 -- # '[' -z 1504143 ']' 00:07:41.242 17:44:45 -- common/autotest_common.sh@930 -- # kill -0 1504143 00:07:41.242 17:44:45 -- common/autotest_common.sh@931 -- # uname 00:07:41.242 17:44:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:41.242 17:44:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1504143 00:07:41.242 17:44:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:41.242 17:44:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:41.242 17:44:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1504143' 00:07:41.242 killing process with pid 1504143 00:07:41.242 17:44:45 -- common/autotest_common.sh@945 -- # kill 1504143 00:07:41.242 17:44:45 -- common/autotest_common.sh@950 -- # wait 1504143 00:07:41.503 00:07:41.503 real 0m1.640s 00:07:41.503 user 0m2.049s 00:07:41.503 sys 0m0.403s 00:07:41.503 17:44:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.503 17:44:45 -- common/autotest_common.sh@10 -- # set +x 00:07:41.503 ************************************ 00:07:41.503 END TEST app_cmdline 00:07:41.503 ************************************ 00:07:41.503 17:44:45 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:41.503 17:44:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:41.503 17:44:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:41.503 17:44:45 -- common/autotest_common.sh@10 -- # set +x 00:07:41.503 ************************************ 00:07:41.503 START TEST version 00:07:41.503 ************************************ 00:07:41.503 17:44:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:41.503 * Looking for test storage... 
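The app_cmdline test that finishes above exercises the --rpcs-allowed allow-list: only spdk_get_version and rpc_get_methods are reachable, and any other method (env_dpdk_get_mem_stats in the trace) comes back as JSON-RPC error -32601 "Method not found". A hand-run sketch under the same assumptions:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  $SPDK/scripts/rpc.py spdk_get_version        # allowed: returns the version JSON shown above
  $SPDK/scripts/rpc.py env_dpdk_get_mem_stats  # blocked: expect "Method not found" (-32601)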
00:07:41.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:41.503 17:44:45 -- app/version.sh@17 -- # get_header_version major 00:07:41.503 17:44:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:41.503 17:44:45 -- app/version.sh@14 -- # cut -f2 00:07:41.503 17:44:45 -- app/version.sh@14 -- # tr -d '"' 00:07:41.503 17:44:45 -- app/version.sh@17 -- # major=24 00:07:41.503 17:44:45 -- app/version.sh@18 -- # get_header_version minor 00:07:41.503 17:44:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:41.503 17:44:45 -- app/version.sh@14 -- # cut -f2 00:07:41.503 17:44:45 -- app/version.sh@14 -- # tr -d '"' 00:07:41.503 17:44:45 -- app/version.sh@18 -- # minor=1 00:07:41.503 17:44:45 -- app/version.sh@19 -- # get_header_version patch 00:07:41.503 17:44:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:41.503 17:44:45 -- app/version.sh@14 -- # cut -f2 00:07:41.503 17:44:45 -- app/version.sh@14 -- # tr -d '"' 00:07:41.503 17:44:45 -- app/version.sh@19 -- # patch=1 00:07:41.503 17:44:45 -- app/version.sh@20 -- # get_header_version suffix 00:07:41.503 17:44:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:41.503 17:44:45 -- app/version.sh@14 -- # cut -f2 00:07:41.503 17:44:45 -- app/version.sh@14 -- # tr -d '"' 00:07:41.503 17:44:45 -- app/version.sh@20 -- # suffix=-pre 00:07:41.503 17:44:45 -- app/version.sh@22 -- # version=24.1 00:07:41.503 17:44:45 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:41.503 17:44:45 -- app/version.sh@25 -- # version=24.1.1 00:07:41.503 17:44:45 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:41.503 17:44:45 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:41.503 17:44:45 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:41.503 17:44:45 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:41.503 17:44:45 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:41.503 00:07:41.503 real 0m0.163s 00:07:41.503 user 0m0.093s 00:07:41.503 sys 0m0.108s 00:07:41.503 17:44:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.503 17:44:45 -- common/autotest_common.sh@10 -- # set +x 00:07:41.503 ************************************ 00:07:41.503 END TEST version 00:07:41.503 ************************************ 00:07:41.503 17:44:45 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:07:41.765 17:44:45 -- spdk/autotest.sh@204 -- # uname -s 00:07:41.765 17:44:45 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:07:41.765 17:44:45 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:41.765 17:44:45 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:41.765 17:44:45 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:07:41.765 17:44:45 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:07:41.765 17:44:45 -- spdk/autotest.sh@268 -- # timing_exit lib 00:07:41.765 17:44:45 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:07:41.765 17:44:45 -- common/autotest_common.sh@10 -- # set +x 00:07:41.765 17:44:45 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:41.765 17:44:45 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:07:41.765 17:44:45 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:07:41.765 17:44:45 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:07:41.765 17:44:45 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:07:41.765 17:44:45 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:07:41.765 17:44:45 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:41.765 17:44:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:41.765 17:44:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:41.765 17:44:45 -- common/autotest_common.sh@10 -- # set +x 00:07:41.765 ************************************ 00:07:41.765 START TEST nvmf_tcp 00:07:41.765 ************************************ 00:07:41.765 17:44:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:41.765 * Looking for test storage... 00:07:41.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:41.765 17:44:45 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:41.765 17:44:45 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:41.765 17:44:45 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:41.765 17:44:45 -- nvmf/common.sh@7 -- # uname -s 00:07:41.765 17:44:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:41.765 17:44:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:41.765 17:44:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:41.765 17:44:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:41.765 17:44:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:41.765 17:44:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:41.765 17:44:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:41.765 17:44:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:41.765 17:44:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:41.765 17:44:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:41.765 17:44:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:07:41.765 17:44:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:07:41.765 17:44:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:41.765 17:44:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.765 17:44:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:41.765 17:44:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:41.765 17:44:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.765 17:44:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.765 17:44:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.765 17:44:45 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.765 17:44:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.765 17:44:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.765 17:44:45 -- paths/export.sh@5 -- # export PATH 00:07:41.765 17:44:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.765 17:44:45 -- nvmf/common.sh@46 -- # : 0 00:07:41.765 17:44:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:41.765 17:44:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:41.765 17:44:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:41.765 17:44:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:41.765 17:44:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.765 17:44:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:41.765 17:44:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:41.765 17:44:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:41.765 17:44:45 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:41.765 17:44:45 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:41.765 17:44:45 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:41.765 17:44:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:41.765 17:44:45 -- common/autotest_common.sh@10 -- # set +x 00:07:41.765 17:44:45 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:41.765 17:44:45 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:41.765 17:44:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:41.765 17:44:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:41.765 17:44:45 -- common/autotest_common.sh@10 -- # set +x 00:07:41.765 ************************************ 00:07:41.765 START TEST nvmf_example 00:07:41.765 ************************************ 00:07:41.765 17:44:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:42.026 * Looking for test storage... 
00:07:42.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:42.026 17:44:46 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:42.026 17:44:46 -- nvmf/common.sh@7 -- # uname -s 00:07:42.026 17:44:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.026 17:44:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.026 17:44:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.026 17:44:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.026 17:44:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.026 17:44:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.026 17:44:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.026 17:44:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.026 17:44:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.026 17:44:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.026 17:44:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:07:42.026 17:44:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:07:42.027 17:44:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.027 17:44:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.027 17:44:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:42.027 17:44:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:42.027 17:44:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.027 17:44:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.027 17:44:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.027 17:44:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.027 17:44:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.027 17:44:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.027 17:44:46 -- paths/export.sh@5 -- # export PATH 00:07:42.027 17:44:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.027 17:44:46 -- nvmf/common.sh@46 -- # : 0 00:07:42.027 17:44:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:42.027 17:44:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:42.027 17:44:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:42.027 17:44:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:42.027 17:44:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.027 17:44:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:42.027 17:44:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:42.027 17:44:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:42.027 17:44:46 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:42.027 17:44:46 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:42.027 17:44:46 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:42.027 17:44:46 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:42.027 17:44:46 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:42.027 17:44:46 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:42.027 17:44:46 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:42.027 17:44:46 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:42.027 17:44:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:42.027 17:44:46 -- common/autotest_common.sh@10 -- # set +x 00:07:42.027 17:44:46 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:42.027 17:44:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:42.027 17:44:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:42.027 17:44:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:42.027 17:44:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:42.027 17:44:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:42.027 17:44:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.027 17:44:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:42.027 17:44:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.027 17:44:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:42.027 17:44:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:42.027 17:44:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:42.027 17:44:46 -- 
common/autotest_common.sh@10 -- # set +x 00:07:50.166 17:44:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:50.166 17:44:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:50.166 17:44:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:50.166 17:44:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:50.166 17:44:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:50.166 17:44:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:50.166 17:44:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:50.166 17:44:53 -- nvmf/common.sh@294 -- # net_devs=() 00:07:50.166 17:44:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:50.166 17:44:53 -- nvmf/common.sh@295 -- # e810=() 00:07:50.166 17:44:53 -- nvmf/common.sh@295 -- # local -ga e810 00:07:50.166 17:44:53 -- nvmf/common.sh@296 -- # x722=() 00:07:50.166 17:44:53 -- nvmf/common.sh@296 -- # local -ga x722 00:07:50.166 17:44:53 -- nvmf/common.sh@297 -- # mlx=() 00:07:50.166 17:44:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:50.166 17:44:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:50.166 17:44:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:50.166 17:44:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:50.166 17:44:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:50.166 17:44:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:50.166 17:44:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:50.166 17:44:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:50.166 17:44:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:50.166 17:44:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:50.166 17:44:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:50.166 17:44:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:50.166 17:44:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:50.166 17:44:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:50.166 17:44:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:50.166 17:44:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:50.166 17:44:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:50.166 17:44:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:50.166 17:44:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:50.166 17:44:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:50.166 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:50.166 17:44:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:50.166 17:44:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:50.166 17:44:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.166 17:44:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.166 17:44:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:50.166 17:44:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:50.166 17:44:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:50.166 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:50.166 17:44:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:50.166 17:44:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:50.166 17:44:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.166 17:44:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
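NIC discovery here is driven entirely from sysfs: nvmf/common.sh matches the Intel E810 device ID 0x159b on each PCI function and then collects whatever net devices sit under it, which is what produces the "Found net devices under 0000:4b:00..." lines just below. The same lookup by hand, using the bus addresses from the log:

  cat /sys/bus/pci/devices/0000:4b:00.0/device  # 0x159b, the E810 device ID matched above
  ls /sys/bus/pci/devices/0000:4b:00.0/net/     # the netdev attached to that port (picked up just below)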
00:07:50.166 17:44:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:50.166 17:44:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:50.166 17:44:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:50.166 17:44:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:50.166 17:44:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:50.166 17:44:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.166 17:44:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:50.166 17:44:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.166 17:44:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:50.166 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:50.166 17:44:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.166 17:44:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:50.166 17:44:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.166 17:44:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:50.166 17:44:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.166 17:44:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:50.166 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:50.166 17:44:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.166 17:44:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:50.166 17:44:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:50.166 17:44:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:50.166 17:44:53 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:50.166 17:44:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:50.166 17:44:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:50.166 17:44:53 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:50.166 17:44:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:50.166 17:44:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:50.166 17:44:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:50.166 17:44:53 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:50.166 17:44:53 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:50.166 17:44:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:50.166 17:44:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:50.166 17:44:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:50.166 17:44:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:50.166 17:44:53 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:50.166 17:44:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:50.166 17:44:53 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:50.166 17:44:53 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:50.166 17:44:53 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:50.166 17:44:53 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:50.166 17:44:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:50.166 17:44:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:50.166 17:44:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:50.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:50.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:07:50.166 00:07:50.166 --- 10.0.0.2 ping statistics --- 00:07:50.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.166 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:07:50.166 17:44:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:50.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:50.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:07:50.166 00:07:50.166 --- 10.0.0.1 ping statistics --- 00:07:50.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.166 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:07:50.166 17:44:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:50.166 17:44:54 -- nvmf/common.sh@410 -- # return 0 00:07:50.166 17:44:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:50.166 17:44:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:50.166 17:44:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:50.166 17:44:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:50.166 17:44:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:50.166 17:44:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:50.166 17:44:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:50.166 17:44:54 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:50.167 17:44:54 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:50.167 17:44:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:50.167 17:44:54 -- common/autotest_common.sh@10 -- # set +x 00:07:50.167 17:44:54 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:50.167 17:44:54 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:50.167 17:44:54 -- target/nvmf_example.sh@34 -- # nvmfpid=1508486 00:07:50.167 17:44:54 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:50.167 17:44:54 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:50.167 17:44:54 -- target/nvmf_example.sh@36 -- # waitforlisten 1508486 00:07:50.167 17:44:54 -- common/autotest_common.sh@819 -- # '[' -z 1508486 ']' 00:07:50.167 17:44:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.167 17:44:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:50.167 17:44:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
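The nvmf_tcp_init trace above is the core of the physical TCP setup: the two E810 ports are cabled back to back, and one of them is moved into a network namespace so a single host can play both target and initiator. A condensed recap of the commands it ran, with the same interface names and addresses as in the trace:

    TARGET_NS=cvl_0_0_ns_spdk
    ip netns add "$TARGET_NS"
    ip link set cvl_0_0 netns "$TARGET_NS"                 # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the default namespace
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # admit NVMe/TCP on the default port
    ping -c 1 10.0.0.2                                     # initiator -> target reachability check
    ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1          # target -> initiator

The sub-millisecond ping times above confirm the back-to-back link is up before the nvmf example target is started inside the namespace.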
00:07:50.167 17:44:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:50.167 17:44:54 -- common/autotest_common.sh@10 -- # set +x 00:07:50.167 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.738 17:44:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:50.738 17:44:55 -- common/autotest_common.sh@852 -- # return 0 00:07:50.738 17:44:55 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:50.738 17:44:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:50.738 17:44:55 -- common/autotest_common.sh@10 -- # set +x 00:07:50.999 17:44:55 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:50.999 17:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.999 17:44:55 -- common/autotest_common.sh@10 -- # set +x 00:07:50.999 17:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.999 17:44:55 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:50.999 17:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.999 17:44:55 -- common/autotest_common.sh@10 -- # set +x 00:07:50.999 17:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.999 17:44:55 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:50.999 17:44:55 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:50.999 17:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.999 17:44:55 -- common/autotest_common.sh@10 -- # set +x 00:07:50.999 17:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.999 17:44:55 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:50.999 17:44:55 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:50.999 17:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.999 17:44:55 -- common/autotest_common.sh@10 -- # set +x 00:07:50.999 17:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.999 17:44:55 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:50.999 17:44:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.999 17:44:55 -- common/autotest_common.sh@10 -- # set +x 00:07:50.999 17:44:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.999 17:44:55 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:50.999 17:44:55 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:50.999 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.227 Initializing NVMe Controllers 00:08:03.227 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:03.227 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:03.227 Initialization complete. Launching workers. 
00:08:03.227 ======================================================== 00:08:03.227 Latency(us) 00:08:03.227 Device Information : IOPS MiB/s Average min max 00:08:03.227 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17512.90 68.41 3656.09 807.20 23982.94 00:08:03.227 ======================================================== 00:08:03.227 Total : 17512.90 68.41 3656.09 807.20 23982.94 00:08:03.227 00:08:03.227 17:45:05 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:03.227 17:45:05 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:03.227 17:45:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:03.227 17:45:05 -- nvmf/common.sh@116 -- # sync 00:08:03.227 17:45:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:03.227 17:45:05 -- nvmf/common.sh@119 -- # set +e 00:08:03.227 17:45:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:03.227 17:45:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:03.227 rmmod nvme_tcp 00:08:03.227 rmmod nvme_fabrics 00:08:03.227 rmmod nvme_keyring 00:08:03.227 17:45:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:03.227 17:45:05 -- nvmf/common.sh@123 -- # set -e 00:08:03.227 17:45:05 -- nvmf/common.sh@124 -- # return 0 00:08:03.227 17:45:05 -- nvmf/common.sh@477 -- # '[' -n 1508486 ']' 00:08:03.227 17:45:05 -- nvmf/common.sh@478 -- # killprocess 1508486 00:08:03.227 17:45:05 -- common/autotest_common.sh@926 -- # '[' -z 1508486 ']' 00:08:03.227 17:45:05 -- common/autotest_common.sh@930 -- # kill -0 1508486 00:08:03.227 17:45:05 -- common/autotest_common.sh@931 -- # uname 00:08:03.227 17:45:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:03.227 17:45:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1508486 00:08:03.227 17:45:05 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:08:03.227 17:45:05 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:08:03.227 17:45:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1508486' 00:08:03.227 killing process with pid 1508486 00:08:03.227 17:45:05 -- common/autotest_common.sh@945 -- # kill 1508486 00:08:03.227 17:45:05 -- common/autotest_common.sh@950 -- # wait 1508486 00:08:03.227 nvmf threads initialize successfully 00:08:03.227 bdev subsystem init successfully 00:08:03.227 created a nvmf target service 00:08:03.227 create targets's poll groups done 00:08:03.227 all subsystems of target started 00:08:03.227 nvmf target is running 00:08:03.227 all subsystems of target stopped 00:08:03.227 destroy targets's poll groups done 00:08:03.227 destroyed the nvmf target service 00:08:03.227 bdev subsystem finish successfully 00:08:03.227 nvmf threads destroy successfully 00:08:03.227 17:45:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:03.227 17:45:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:03.227 17:45:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:03.227 17:45:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:03.227 17:45:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:03.227 17:45:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.227 17:45:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:03.227 17:45:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.487 17:45:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:03.487 17:45:07 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:03.487 17:45:07 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:08:03.487 17:45:07 -- common/autotest_common.sh@10 -- # set +x 00:08:03.749 00:08:03.749 real 0m21.831s 00:08:03.749 user 0m47.233s 00:08:03.749 sys 0m7.027s 00:08:03.749 17:45:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.749 17:45:07 -- common/autotest_common.sh@10 -- # set +x 00:08:03.749 ************************************ 00:08:03.749 END TEST nvmf_example 00:08:03.749 ************************************ 00:08:03.749 17:45:07 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:03.749 17:45:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:03.749 17:45:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:03.749 17:45:07 -- common/autotest_common.sh@10 -- # set +x 00:08:03.749 ************************************ 00:08:03.749 START TEST nvmf_filesystem 00:08:03.749 ************************************ 00:08:03.749 17:45:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:03.749 * Looking for test storage... 00:08:03.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:03.749 17:45:07 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:03.749 17:45:07 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:03.749 17:45:07 -- common/autotest_common.sh@34 -- # set -e 00:08:03.749 17:45:07 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:03.749 17:45:07 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:03.749 17:45:07 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:03.749 17:45:07 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:03.749 17:45:07 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:03.749 17:45:07 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:03.749 17:45:07 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:03.749 17:45:07 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:03.749 17:45:07 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:03.749 17:45:07 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:03.749 17:45:07 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:03.749 17:45:07 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:03.749 17:45:07 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:03.749 17:45:07 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:03.749 17:45:07 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:03.749 17:45:07 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:03.749 17:45:07 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:03.749 17:45:07 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:03.749 17:45:07 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:03.749 17:45:07 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:03.749 17:45:07 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:03.749 17:45:07 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:03.749 17:45:07 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:03.749 17:45:07 -- 
common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:03.749 17:45:07 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:03.749 17:45:07 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:03.749 17:45:07 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:03.749 17:45:07 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:03.749 17:45:07 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:03.749 17:45:07 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:03.749 17:45:07 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:03.749 17:45:07 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:03.749 17:45:07 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:03.749 17:45:07 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:03.749 17:45:07 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:03.749 17:45:07 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:03.749 17:45:07 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:03.749 17:45:07 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:03.749 17:45:07 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:03.749 17:45:07 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:03.749 17:45:07 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:03.749 17:45:07 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:03.749 17:45:07 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:03.749 17:45:07 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:03.749 17:45:07 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:03.749 17:45:07 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:03.749 17:45:07 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:03.749 17:45:07 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:03.749 17:45:07 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:03.749 17:45:07 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:03.749 17:45:07 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:03.749 17:45:07 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:03.749 17:45:07 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:03.749 17:45:07 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:03.749 17:45:07 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:08:03.749 17:45:07 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:03.749 17:45:07 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:03.749 17:45:07 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:03.749 17:45:07 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:03.749 17:45:07 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:03.749 17:45:07 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:03.749 17:45:07 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:08:03.749 17:45:07 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:03.749 17:45:07 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:03.749 17:45:07 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:08:03.749 17:45:07 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:03.749 17:45:07 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:03.749 17:45:07 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:03.749 17:45:07 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:03.749 17:45:07 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 
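Before the filesystem test gets going, it is worth restating the RPC sequence the nvmf_example run above used to provision the target, since the same pattern recurs in the target tests that follow. The commands and arguments are exactly the ones traced above; writing them as explicit scripts/rpc.py calls (the suite actually goes through its rpc_cmd wrapper) and the socket path are assumptions for illustration:

    RPC="ip netns exec cvl_0_0_ns_spdk ./scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192                        # TCP transport, 8 KiB IO unit size
    $RPC bdev_malloc_create 64 512                                      # 64 MiB RAM-backed bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0       # expose the bdev as namespace 1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The IOPS table above then comes from spdk_nvme_perf connecting to that listener with -q 64 -o 4096 -w randrw -M 30 -t 10.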
00:08:03.749 17:45:07 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:03.749 17:45:07 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:08:03.749 17:45:07 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:03.749 17:45:07 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:03.749 17:45:07 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:03.749 17:45:07 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:03.749 17:45:07 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:03.749 17:45:07 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:03.749 17:45:07 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:03.749 17:45:07 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:03.749 17:45:07 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:03.749 17:45:07 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:03.749 17:45:07 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:03.749 17:45:07 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:03.749 17:45:07 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:03.749 17:45:07 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:03.749 17:45:07 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:03.749 17:45:07 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:03.749 17:45:07 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:03.749 17:45:07 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:03.749 17:45:07 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:03.749 17:45:07 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:03.749 17:45:07 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:03.749 17:45:07 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:03.749 17:45:07 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:03.749 17:45:07 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:03.749 17:45:07 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:03.749 17:45:07 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:03.749 17:45:07 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:03.749 #define SPDK_CONFIG_H 00:08:03.749 #define SPDK_CONFIG_APPS 1 00:08:03.749 #define SPDK_CONFIG_ARCH native 00:08:03.749 #undef SPDK_CONFIG_ASAN 00:08:03.749 #undef SPDK_CONFIG_AVAHI 00:08:03.749 #undef SPDK_CONFIG_CET 00:08:03.749 #define SPDK_CONFIG_COVERAGE 1 00:08:03.749 #define SPDK_CONFIG_CROSS_PREFIX 00:08:03.749 #undef SPDK_CONFIG_CRYPTO 00:08:03.749 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:03.749 #undef SPDK_CONFIG_CUSTOMOCF 00:08:03.749 #undef SPDK_CONFIG_DAOS 00:08:03.749 #define SPDK_CONFIG_DAOS_DIR 00:08:03.749 #define SPDK_CONFIG_DEBUG 1 00:08:03.749 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:03.749 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:03.749 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:03.749 #define 
SPDK_CONFIG_DPDK_LIB_DIR 00:08:03.749 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:03.749 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:03.749 #define SPDK_CONFIG_EXAMPLES 1 00:08:03.749 #undef SPDK_CONFIG_FC 00:08:03.749 #define SPDK_CONFIG_FC_PATH 00:08:03.749 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:03.749 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:03.749 #undef SPDK_CONFIG_FUSE 00:08:03.749 #undef SPDK_CONFIG_FUZZER 00:08:03.749 #define SPDK_CONFIG_FUZZER_LIB 00:08:03.749 #undef SPDK_CONFIG_GOLANG 00:08:03.749 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:03.749 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:03.749 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:03.749 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:03.749 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:03.749 #define SPDK_CONFIG_IDXD 1 00:08:03.749 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:03.749 #undef SPDK_CONFIG_IPSEC_MB 00:08:03.749 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:03.749 #define SPDK_CONFIG_ISAL 1 00:08:03.749 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:03.749 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:03.749 #define SPDK_CONFIG_LIBDIR 00:08:03.749 #undef SPDK_CONFIG_LTO 00:08:03.749 #define SPDK_CONFIG_MAX_LCORES 00:08:03.749 #define SPDK_CONFIG_NVME_CUSE 1 00:08:03.749 #undef SPDK_CONFIG_OCF 00:08:03.749 #define SPDK_CONFIG_OCF_PATH 00:08:03.749 #define SPDK_CONFIG_OPENSSL_PATH 00:08:03.749 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:03.749 #undef SPDK_CONFIG_PGO_USE 00:08:03.749 #define SPDK_CONFIG_PREFIX /usr/local 00:08:03.749 #undef SPDK_CONFIG_RAID5F 00:08:03.749 #undef SPDK_CONFIG_RBD 00:08:03.749 #define SPDK_CONFIG_RDMA 1 00:08:03.749 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:03.749 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:03.749 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:03.749 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:03.749 #define SPDK_CONFIG_SHARED 1 00:08:03.749 #undef SPDK_CONFIG_SMA 00:08:03.749 #define SPDK_CONFIG_TESTS 1 00:08:03.749 #undef SPDK_CONFIG_TSAN 00:08:03.749 #define SPDK_CONFIG_UBLK 1 00:08:03.749 #define SPDK_CONFIG_UBSAN 1 00:08:03.749 #undef SPDK_CONFIG_UNIT_TESTS 00:08:03.749 #undef SPDK_CONFIG_URING 00:08:03.749 #define SPDK_CONFIG_URING_PATH 00:08:03.749 #undef SPDK_CONFIG_URING_ZNS 00:08:03.749 #undef SPDK_CONFIG_USDT 00:08:03.749 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:03.749 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:03.749 #undef SPDK_CONFIG_VFIO_USER 00:08:03.749 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:03.749 #define SPDK_CONFIG_VHOST 1 00:08:03.749 #define SPDK_CONFIG_VIRTIO 1 00:08:03.749 #undef SPDK_CONFIG_VTUNE 00:08:03.749 #define SPDK_CONFIG_VTUNE_DIR 00:08:03.749 #define SPDK_CONFIG_WERROR 1 00:08:03.749 #define SPDK_CONFIG_WPDK_DIR 00:08:03.749 #undef SPDK_CONFIG_XNVME 00:08:03.749 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:03.749 17:45:07 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:03.749 17:45:07 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:03.749 17:45:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.749 17:45:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.749 17:45:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.749 17:45:07 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.749 17:45:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.749 17:45:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.749 17:45:07 -- paths/export.sh@5 -- # export PATH 00:08:03.749 17:45:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.749 17:45:07 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:03.749 17:45:07 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:03.749 17:45:07 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:03.749 17:45:07 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:03.749 17:45:07 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:03.749 17:45:08 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:03.749 17:45:08 -- pm/common@16 -- # TEST_TAG=N/A 00:08:03.749 17:45:08 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:03.749 17:45:08 -- common/autotest_common.sh@52 -- # : 1 00:08:03.749 17:45:08 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:03.749 17:45:08 -- common/autotest_common.sh@56 -- # : 0 00:08:03.749 17:45:08 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:03.749 17:45:08 -- 
common/autotest_common.sh@58 -- # : 0 00:08:03.749 17:45:08 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:03.749 17:45:08 -- common/autotest_common.sh@60 -- # : 1 00:08:03.749 17:45:08 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:03.749 17:45:08 -- common/autotest_common.sh@62 -- # : 0 00:08:03.749 17:45:08 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:03.749 17:45:08 -- common/autotest_common.sh@64 -- # : 00:08:03.749 17:45:08 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:03.749 17:45:08 -- common/autotest_common.sh@66 -- # : 0 00:08:03.749 17:45:08 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:03.749 17:45:08 -- common/autotest_common.sh@68 -- # : 0 00:08:03.749 17:45:08 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:03.749 17:45:08 -- common/autotest_common.sh@70 -- # : 0 00:08:03.749 17:45:08 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:03.749 17:45:08 -- common/autotest_common.sh@72 -- # : 0 00:08:03.749 17:45:08 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:03.749 17:45:08 -- common/autotest_common.sh@74 -- # : 0 00:08:03.749 17:45:08 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:03.749 17:45:08 -- common/autotest_common.sh@76 -- # : 0 00:08:03.749 17:45:08 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:03.749 17:45:08 -- common/autotest_common.sh@78 -- # : 0 00:08:03.749 17:45:08 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:03.749 17:45:08 -- common/autotest_common.sh@80 -- # : 1 00:08:03.749 17:45:08 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:03.749 17:45:08 -- common/autotest_common.sh@82 -- # : 0 00:08:03.749 17:45:08 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:03.749 17:45:08 -- common/autotest_common.sh@84 -- # : 0 00:08:03.749 17:45:08 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:03.749 17:45:08 -- common/autotest_common.sh@86 -- # : 1 00:08:03.749 17:45:08 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:03.749 17:45:08 -- common/autotest_common.sh@88 -- # : 0 00:08:03.749 17:45:08 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:03.749 17:45:08 -- common/autotest_common.sh@90 -- # : 0 00:08:03.749 17:45:08 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:03.749 17:45:08 -- common/autotest_common.sh@92 -- # : 0 00:08:03.749 17:45:08 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:03.749 17:45:08 -- common/autotest_common.sh@94 -- # : 0 00:08:03.749 17:45:08 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:03.749 17:45:08 -- common/autotest_common.sh@96 -- # : tcp 00:08:03.750 17:45:08 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:03.750 17:45:08 -- common/autotest_common.sh@98 -- # : 0 00:08:03.750 17:45:08 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:03.750 17:45:08 -- common/autotest_common.sh@100 -- # : 0 00:08:03.750 17:45:08 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:03.750 17:45:08 -- common/autotest_common.sh@102 -- # : 0 00:08:03.750 17:45:08 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:03.750 17:45:08 -- common/autotest_common.sh@104 -- # : 0 00:08:03.750 17:45:08 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:03.750 
17:45:08 -- common/autotest_common.sh@106 -- # : 0 00:08:03.750 17:45:08 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:03.750 17:45:08 -- common/autotest_common.sh@108 -- # : 0 00:08:03.750 17:45:08 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:03.750 17:45:08 -- common/autotest_common.sh@110 -- # : 0 00:08:03.750 17:45:08 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:03.750 17:45:08 -- common/autotest_common.sh@112 -- # : 0 00:08:03.750 17:45:08 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:03.750 17:45:08 -- common/autotest_common.sh@114 -- # : 0 00:08:03.750 17:45:08 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:03.750 17:45:08 -- common/autotest_common.sh@116 -- # : 1 00:08:03.750 17:45:08 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:03.750 17:45:08 -- common/autotest_common.sh@118 -- # : 00:08:03.750 17:45:08 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:03.750 17:45:08 -- common/autotest_common.sh@120 -- # : 0 00:08:03.750 17:45:08 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:03.750 17:45:08 -- common/autotest_common.sh@122 -- # : 0 00:08:03.750 17:45:08 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:03.750 17:45:08 -- common/autotest_common.sh@124 -- # : 0 00:08:03.750 17:45:08 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:03.750 17:45:08 -- common/autotest_common.sh@126 -- # : 0 00:08:03.750 17:45:08 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:03.750 17:45:08 -- common/autotest_common.sh@128 -- # : 0 00:08:03.750 17:45:08 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:03.750 17:45:08 -- common/autotest_common.sh@130 -- # : 0 00:08:03.750 17:45:08 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:03.750 17:45:08 -- common/autotest_common.sh@132 -- # : 00:08:03.750 17:45:08 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:03.750 17:45:08 -- common/autotest_common.sh@134 -- # : true 00:08:03.750 17:45:08 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:03.750 17:45:08 -- common/autotest_common.sh@136 -- # : 0 00:08:03.750 17:45:08 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:03.750 17:45:08 -- common/autotest_common.sh@138 -- # : 0 00:08:03.750 17:45:08 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:03.750 17:45:08 -- common/autotest_common.sh@140 -- # : 0 00:08:03.750 17:45:08 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:03.750 17:45:08 -- common/autotest_common.sh@142 -- # : 0 00:08:03.750 17:45:08 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:03.750 17:45:08 -- common/autotest_common.sh@144 -- # : 0 00:08:03.750 17:45:08 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:03.750 17:45:08 -- common/autotest_common.sh@146 -- # : 0 00:08:03.750 17:45:08 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:03.750 17:45:08 -- common/autotest_common.sh@148 -- # : e810 00:08:03.750 17:45:08 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:03.750 17:45:08 -- common/autotest_common.sh@150 -- # : 0 00:08:03.750 17:45:08 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:03.750 17:45:08 -- common/autotest_common.sh@152 -- # : 0 00:08:03.750 17:45:08 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 
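The long run of ': <value>' / 'export SPDK_TEST_*' pairs traced here is autotest_common.sh stamping a default onto every test knob: each flag keeps whatever autorun-spdk.conf injected and otherwise falls back to a built-in default. Roughly, the effect is the following (flag names and values from the trace; the real script's expansion syntax may differ slightly):

    : "${SPDK_RUN_FUNCTIONAL_TEST:=1}";   export SPDK_RUN_FUNCTIONAL_TEST
    : "${SPDK_TEST_NVMF:=1}";             export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVME_CLI:=1}";         export SPDK_TEST_NVME_CLI
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"; export SPDK_TEST_NVMF_TRANSPORT
    : "${SPDK_TEST_NVMF_NICS:=e810}";     export SPDK_TEST_NVMF_NICS
    : "${SPDK_RUN_UBSAN:=1}";             export SPDK_RUN_UBSAN
    : "${SPDK_TEST_VHOST:=0}";            export SPDK_TEST_VHOST      # features not under test default to 0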
00:08:03.750 17:45:08 -- common/autotest_common.sh@154 -- # : 0 00:08:03.750 17:45:08 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:03.750 17:45:08 -- common/autotest_common.sh@156 -- # : 0 00:08:03.750 17:45:08 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:03.750 17:45:08 -- common/autotest_common.sh@158 -- # : 0 00:08:03.750 17:45:08 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:03.750 17:45:08 -- common/autotest_common.sh@160 -- # : 0 00:08:03.750 17:45:08 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:03.750 17:45:08 -- common/autotest_common.sh@163 -- # : 00:08:03.750 17:45:08 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:03.750 17:45:08 -- common/autotest_common.sh@165 -- # : 0 00:08:03.750 17:45:08 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:03.750 17:45:08 -- common/autotest_common.sh@167 -- # : 0 00:08:03.750 17:45:08 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:03.750 17:45:08 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:03.750 17:45:08 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:03.750 17:45:08 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:03.750 17:45:08 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:03.750 17:45:08 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:03.750 17:45:08 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:03.750 17:45:08 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:03.750 17:45:08 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:03.750 17:45:08 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:03.750 17:45:08 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:03.750 17:45:08 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:03.750 17:45:08 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:03.750 17:45:08 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:03.750 17:45:08 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:03.750 17:45:08 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:03.750 17:45:08 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:03.750 17:45:08 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:03.750 17:45:08 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:03.750 17:45:08 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:03.750 17:45:08 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:04.012 17:45:08 -- common/autotest_common.sh@196 -- # cat 00:08:04.012 17:45:08 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:04.012 17:45:08 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:04.012 17:45:08 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:04.012 17:45:08 -- common/autotest_common.sh@226 -- # export 
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:04.012 17:45:08 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:04.012 17:45:08 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:04.012 17:45:08 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:04.012 17:45:08 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:04.012 17:45:08 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:04.012 17:45:08 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:04.012 17:45:08 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:04.012 17:45:08 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:04.012 17:45:08 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:04.012 17:45:08 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:04.012 17:45:08 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:04.012 17:45:08 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:04.012 17:45:08 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:04.012 17:45:08 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:04.012 17:45:08 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:04.012 17:45:08 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:08:04.012 17:45:08 -- common/autotest_common.sh@249 -- # export valgrind= 00:08:04.012 17:45:08 -- common/autotest_common.sh@249 -- # valgrind= 00:08:04.012 17:45:08 -- common/autotest_common.sh@255 -- # uname -s 00:08:04.012 17:45:08 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:08:04.012 17:45:08 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:08:04.012 17:45:08 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:08:04.012 17:45:08 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:08:04.012 17:45:08 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:04.012 17:45:08 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:04.012 17:45:08 -- common/autotest_common.sh@265 -- # MAKE=make 00:08:04.012 17:45:08 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j128 00:08:04.012 17:45:08 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:08:04.012 17:45:08 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:08:04.012 17:45:08 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:04.012 17:45:08 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:08:04.012 17:45:08 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:08:04.012 17:45:08 -- common/autotest_common.sh@291 -- # for i in "$@" 00:08:04.012 17:45:08 -- common/autotest_common.sh@292 -- # case "$i" in 00:08:04.012 17:45:08 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:08:04.012 17:45:08 -- common/autotest_common.sh@309 -- # [[ -z 1511161 ]] 00:08:04.012 17:45:08 -- common/autotest_common.sh@309 -- # 
kill -0 1511161 00:08:04.012 17:45:08 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:08:04.012 17:45:08 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:08:04.012 17:45:08 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:08:04.012 17:45:08 -- common/autotest_common.sh@322 -- # local mount target_dir 00:08:04.012 17:45:08 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:08:04.012 17:45:08 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:08:04.012 17:45:08 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:08:04.012 17:45:08 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:08:04.012 17:45:08 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.cpojKE 00:08:04.012 17:45:08 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:04.012 17:45:08 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:08:04.012 17:45:08 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:08:04.012 17:45:08 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.cpojKE/tests/target /tmp/spdk.cpojKE 00:08:04.012 17:45:08 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:08:04.012 17:45:08 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:04.012 17:45:08 -- common/autotest_common.sh@318 -- # df -T 00:08:04.012 17:45:08 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:08:04.012 17:45:08 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:08:04.012 17:45:08 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:08:04.012 17:45:08 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:08:04.012 17:45:08 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:08:04.012 17:45:08 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:08:04.012 17:45:08 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:04.012 17:45:08 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:08:04.012 17:45:08 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:08:04.012 17:45:08 -- common/autotest_common.sh@353 -- # avails["$mount"]=954712064 00:08:04.012 17:45:08 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:08:04.012 17:45:08 -- common/autotest_common.sh@354 -- # uses["$mount"]=4329717760 00:08:04.012 17:45:08 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:04.012 17:45:08 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:08:04.012 17:45:08 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:08:04.012 17:45:08 -- common/autotest_common.sh@353 -- # avails["$mount"]=118724739072 00:08:04.012 17:45:08 -- common/autotest_common.sh@353 -- # sizes["$mount"]=129376235520 00:08:04.012 17:45:08 -- common/autotest_common.sh@354 -- # uses["$mount"]=10651496448 00:08:04.012 17:45:08 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:04.012 17:45:08 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:04.012 17:45:08 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:04.012 17:45:08 -- common/autotest_common.sh@353 -- # avails["$mount"]=64685522944 00:08:04.012 17:45:08 -- common/autotest_common.sh@353 -- # 
sizes["$mount"]=64688115712 00:08:04.012 17:45:08 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:08:04.012 17:45:08 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:04.012 17:45:08 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:04.012 17:45:08 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:04.012 17:45:08 -- common/autotest_common.sh@353 -- # avails["$mount"]=25865281536 00:08:04.012 17:45:08 -- common/autotest_common.sh@353 -- # sizes["$mount"]=25875247104 00:08:04.012 17:45:08 -- common/autotest_common.sh@354 -- # uses["$mount"]=9965568 00:08:04.012 17:45:08 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:04.013 17:45:08 -- common/autotest_common.sh@352 -- # mounts["$mount"]=efivarfs 00:08:04.013 17:45:08 -- common/autotest_common.sh@352 -- # fss["$mount"]=efivarfs 00:08:04.013 17:45:08 -- common/autotest_common.sh@353 -- # avails["$mount"]=339968 00:08:04.013 17:45:08 -- common/autotest_common.sh@353 -- # sizes["$mount"]=507904 00:08:04.013 17:45:08 -- common/autotest_common.sh@354 -- # uses["$mount"]=163840 00:08:04.013 17:45:08 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:04.013 17:45:08 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:04.013 17:45:08 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:04.013 17:45:08 -- common/autotest_common.sh@353 -- # avails["$mount"]=64687329280 00:08:04.013 17:45:08 -- common/autotest_common.sh@353 -- # sizes["$mount"]=64688119808 00:08:04.013 17:45:08 -- common/autotest_common.sh@354 -- # uses["$mount"]=790528 00:08:04.013 17:45:08 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:04.013 17:45:08 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:04.013 17:45:08 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:04.013 17:45:08 -- common/autotest_common.sh@353 -- # avails["$mount"]=12937617408 00:08:04.013 17:45:08 -- common/autotest_common.sh@353 -- # sizes["$mount"]=12937621504 00:08:04.013 17:45:08 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:08:04.013 17:45:08 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:04.013 17:45:08 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:08:04.013 * Looking for test storage... 
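set_test_storage, which produced the df output above, reserves roughly 2 GiB for the test run: it measures the filesystem backing the test directory (the spdk_root overlay here, with about 118 GB available out of 129 GB) and only falls back to a freshly created /tmp/spdk.XXXXXX directory when the preferred location is too full. A simplified sketch of that decision, using a single df call instead of the script's mount-table loop:

    requested_size=$((2 * 1024 * 1024 * 1024))       # the script asks for 2147483648 bytes and pads it slightly
    target_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
    avail=$(df -B1 --output=avail "$target_dir" | tail -1)
    if (( avail >= requested_size )); then
        export SPDK_TEST_STORAGE="$target_dir"
    else
        fallback=$(mktemp -udt spdk.XXXXXX)          # unused name under /tmp, e.g. /tmp/spdk.cpojKE
        mkdir -p "$fallback/tests/target"
        export SPDK_TEST_STORAGE="$fallback/tests/target"
    fi
    printf '* Found test storage at %s\n' "$SPDK_TEST_STORAGE"

In this run the overlay easily satisfies the request, so SPDK_TEST_STORAGE stays at the in-tree test/nvmf/target directory, as the 'Found test storage' line just below confirms.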
00:08:04.013 17:45:08 -- common/autotest_common.sh@359 -- # local target_space new_size 00:08:04.013 17:45:08 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:08:04.013 17:45:08 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:04.013 17:45:08 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:04.013 17:45:08 -- common/autotest_common.sh@363 -- # mount=/ 00:08:04.013 17:45:08 -- common/autotest_common.sh@365 -- # target_space=118724739072 00:08:04.013 17:45:08 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:08:04.013 17:45:08 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:08:04.013 17:45:08 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:08:04.013 17:45:08 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:08:04.013 17:45:08 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:08:04.013 17:45:08 -- common/autotest_common.sh@372 -- # new_size=12866088960 00:08:04.013 17:45:08 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:04.013 17:45:08 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:04.013 17:45:08 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:04.013 17:45:08 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:04.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:04.013 17:45:08 -- common/autotest_common.sh@380 -- # return 0 00:08:04.013 17:45:08 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:08:04.013 17:45:08 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:08:04.013 17:45:08 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:04.013 17:45:08 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:04.013 17:45:08 -- common/autotest_common.sh@1672 -- # true 00:08:04.013 17:45:08 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:08:04.013 17:45:08 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:04.013 17:45:08 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:04.013 17:45:08 -- common/autotest_common.sh@27 -- # exec 00:08:04.013 17:45:08 -- common/autotest_common.sh@29 -- # exec 00:08:04.013 17:45:08 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:04.013 17:45:08 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:04.013 17:45:08 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:04.013 17:45:08 -- common/autotest_common.sh@18 -- # set -x 00:08:04.013 17:45:08 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:04.013 17:45:08 -- nvmf/common.sh@7 -- # uname -s 00:08:04.013 17:45:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:04.013 17:45:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:04.013 17:45:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:04.013 17:45:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:04.013 17:45:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:04.013 17:45:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:04.013 17:45:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:04.013 17:45:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:04.013 17:45:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:04.013 17:45:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:04.013 17:45:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:08:04.013 17:45:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:08:04.013 17:45:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:04.013 17:45:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:04.013 17:45:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:04.013 17:45:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:04.013 17:45:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:04.013 17:45:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:04.013 17:45:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:04.013 17:45:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.013 17:45:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.013 17:45:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.013 17:45:08 -- paths/export.sh@5 -- # export PATH 00:08:04.013 17:45:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.013 17:45:08 -- nvmf/common.sh@46 -- # : 0 00:08:04.013 17:45:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:04.013 17:45:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:04.013 17:45:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:04.013 17:45:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:04.013 17:45:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:04.013 17:45:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:04.013 17:45:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:04.013 17:45:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:04.013 17:45:08 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:04.013 17:45:08 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:04.013 17:45:08 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:04.013 17:45:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:04.013 17:45:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:04.013 17:45:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:04.013 17:45:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:04.013 17:45:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:04.013 17:45:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.013 17:45:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:04.013 17:45:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.013 17:45:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:04.013 17:45:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:04.013 17:45:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:04.013 17:45:08 -- common/autotest_common.sh@10 -- # set +x 00:08:12.174 17:45:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:12.174 17:45:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:12.174 17:45:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:12.174 17:45:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:12.174 17:45:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:12.174 17:45:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:12.174 17:45:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:12.174 17:45:16 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:12.174 17:45:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:12.174 17:45:16 -- nvmf/common.sh@295 -- # e810=() 00:08:12.174 17:45:16 -- nvmf/common.sh@295 -- # local -ga e810 00:08:12.174 17:45:16 -- nvmf/common.sh@296 -- # x722=() 00:08:12.174 17:45:16 -- nvmf/common.sh@296 -- # local -ga x722 00:08:12.174 17:45:16 -- nvmf/common.sh@297 -- # mlx=() 00:08:12.174 17:45:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:12.174 17:45:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:12.174 17:45:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:12.174 17:45:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:12.174 17:45:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:12.174 17:45:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:12.174 17:45:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:12.174 17:45:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:12.174 17:45:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:12.174 17:45:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:12.174 17:45:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:12.174 17:45:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:12.174 17:45:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:12.174 17:45:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:12.174 17:45:16 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:12.174 17:45:16 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:12.174 17:45:16 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:12.174 17:45:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:12.174 17:45:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:12.174 17:45:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:12.174 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:12.174 17:45:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:12.174 17:45:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:12.174 17:45:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.174 17:45:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.174 17:45:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:12.174 17:45:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:12.174 17:45:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:12.174 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:12.174 17:45:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:12.174 17:45:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:12.174 17:45:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.174 17:45:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.174 17:45:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:12.174 17:45:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:12.174 17:45:16 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:12.175 17:45:16 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:12.175 17:45:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:12.175 17:45:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.175 17:45:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:12.175 17:45:16 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.175 17:45:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:12.175 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:12.175 17:45:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.175 17:45:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:12.175 17:45:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.175 17:45:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:12.175 17:45:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.175 17:45:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:12.175 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:12.175 17:45:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.175 17:45:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:12.175 17:45:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:12.175 17:45:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:12.175 17:45:16 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:12.175 17:45:16 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:12.175 17:45:16 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:12.175 17:45:16 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:12.175 17:45:16 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:12.175 17:45:16 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:12.175 17:45:16 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:12.175 17:45:16 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:12.175 17:45:16 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:12.175 17:45:16 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:12.175 17:45:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:12.175 17:45:16 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:12.175 17:45:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:12.175 17:45:16 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:12.175 17:45:16 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:12.175 17:45:16 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:12.175 17:45:16 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:12.175 17:45:16 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:12.175 17:45:16 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:12.175 17:45:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:12.175 17:45:16 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:12.175 17:45:16 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:12.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:12.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:08:12.175 00:08:12.175 --- 10.0.0.2 ping statistics --- 00:08:12.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.175 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:08:12.175 17:45:16 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:12.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:12.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:08:12.175 00:08:12.175 --- 10.0.0.1 ping statistics --- 00:08:12.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.175 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:08:12.175 17:45:16 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:12.175 17:45:16 -- nvmf/common.sh@410 -- # return 0 00:08:12.175 17:45:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:12.175 17:45:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:12.175 17:45:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:12.175 17:45:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:12.175 17:45:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:12.175 17:45:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:12.175 17:45:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:12.175 17:45:16 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:12.175 17:45:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:12.175 17:45:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:12.175 17:45:16 -- common/autotest_common.sh@10 -- # set +x 00:08:12.175 ************************************ 00:08:12.175 START TEST nvmf_filesystem_no_in_capsule 00:08:12.175 ************************************ 00:08:12.175 17:45:16 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:08:12.175 17:45:16 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:12.175 17:45:16 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:12.175 17:45:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:12.175 17:45:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:12.175 17:45:16 -- common/autotest_common.sh@10 -- # set +x 00:08:12.175 17:45:16 -- nvmf/common.sh@469 -- # nvmfpid=1515655 00:08:12.175 17:45:16 -- nvmf/common.sh@470 -- # waitforlisten 1515655 00:08:12.175 17:45:16 -- common/autotest_common.sh@819 -- # '[' -z 1515655 ']' 00:08:12.175 17:45:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:12.175 17:45:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.175 17:45:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:12.175 17:45:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.175 17:45:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:12.175 17:45:16 -- common/autotest_common.sh@10 -- # set +x 00:08:12.175 [2024-07-22 17:45:16.444839] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
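The nvmf_tcp_init phase traced above wires the test topology out of the two back-to-back E810 ports found earlier (cvl_0_0 and cvl_0_1): one port is moved into a private network namespace and addressed as the target at 10.0.0.2, the other stays in the root namespace as the initiator at 10.0.0.1, and both directions are verified with a single ping before nvme-tcp is loaded. A condensed sketch of that sequence, using the interface and namespace names from this particular run (they are host-specific):

    # keep the target port in its own namespace so host and target stacks never share an interface
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address (namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP traffic on the default port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp                                                  # initiator-side kernel driver

From here on the target application is always launched as "ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...", which is why the nvmfappstart lines below carry the netns prefix.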
00:08:12.175 [2024-07-22 17:45:16.444902] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.436 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.436 [2024-07-22 17:45:16.538259] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:12.436 [2024-07-22 17:45:16.627757] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:12.436 [2024-07-22 17:45:16.627922] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.436 [2024-07-22 17:45:16.627932] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:12.436 [2024-07-22 17:45:16.627939] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:12.436 [2024-07-22 17:45:16.628081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.436 [2024-07-22 17:45:16.628212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.436 [2024-07-22 17:45:16.628347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:12.436 [2024-07-22 17:45:16.628361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.377 17:45:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:13.377 17:45:17 -- common/autotest_common.sh@852 -- # return 0 00:08:13.377 17:45:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:13.377 17:45:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:13.377 17:45:17 -- common/autotest_common.sh@10 -- # set +x 00:08:13.377 17:45:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:13.377 17:45:17 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:13.377 17:45:17 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:13.377 17:45:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:13.377 17:45:17 -- common/autotest_common.sh@10 -- # set +x 00:08:13.377 [2024-07-22 17:45:17.344626] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:13.377 17:45:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:13.377 17:45:17 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:13.377 17:45:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:13.377 17:45:17 -- common/autotest_common.sh@10 -- # set +x 00:08:13.377 Malloc1 00:08:13.377 17:45:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:13.377 17:45:17 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:13.377 17:45:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:13.377 17:45:17 -- common/autotest_common.sh@10 -- # set +x 00:08:13.377 17:45:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:13.377 17:45:17 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:13.377 17:45:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:13.377 17:45:17 -- common/autotest_common.sh@10 -- # set +x 00:08:13.377 17:45:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:13.377 17:45:17 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
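The target provisioning just traced goes through the rpc_cmd helper, which in this tree forwards to scripts/rpc.py against the freshly started nvmf_tgt (default RPC socket /var/tmp/spdk.sock). Treating the rpc.py path and socket as assumptions about how the harness normally invokes it, and taking the arguments verbatim from the trace, the no-in-capsule variant boils down to roughly these five calls:

    rpc=./scripts/rpc.py                                           # relative to the spdk checkout used by this job
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0              # -c 0: in-capsule data disabled for this variant
    $rpc bdev_malloc_create 512 512 -b Malloc1                     # 512 MiB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420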
00:08:13.377 17:45:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:13.377 17:45:17 -- common/autotest_common.sh@10 -- # set +x 00:08:13.377 [2024-07-22 17:45:17.466420] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:13.377 17:45:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:13.377 17:45:17 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:13.377 17:45:17 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:13.377 17:45:17 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:13.377 17:45:17 -- common/autotest_common.sh@1359 -- # local bs 00:08:13.377 17:45:17 -- common/autotest_common.sh@1360 -- # local nb 00:08:13.377 17:45:17 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:13.377 17:45:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:13.377 17:45:17 -- common/autotest_common.sh@10 -- # set +x 00:08:13.377 17:45:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:13.377 17:45:17 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:13.377 { 00:08:13.377 "name": "Malloc1", 00:08:13.377 "aliases": [ 00:08:13.377 "87ac174f-5417-449b-98af-cb955a6007c9" 00:08:13.377 ], 00:08:13.377 "product_name": "Malloc disk", 00:08:13.377 "block_size": 512, 00:08:13.377 "num_blocks": 1048576, 00:08:13.377 "uuid": "87ac174f-5417-449b-98af-cb955a6007c9", 00:08:13.377 "assigned_rate_limits": { 00:08:13.377 "rw_ios_per_sec": 0, 00:08:13.377 "rw_mbytes_per_sec": 0, 00:08:13.377 "r_mbytes_per_sec": 0, 00:08:13.377 "w_mbytes_per_sec": 0 00:08:13.377 }, 00:08:13.377 "claimed": true, 00:08:13.377 "claim_type": "exclusive_write", 00:08:13.377 "zoned": false, 00:08:13.377 "supported_io_types": { 00:08:13.377 "read": true, 00:08:13.377 "write": true, 00:08:13.377 "unmap": true, 00:08:13.377 "write_zeroes": true, 00:08:13.377 "flush": true, 00:08:13.377 "reset": true, 00:08:13.377 "compare": false, 00:08:13.377 "compare_and_write": false, 00:08:13.377 "abort": true, 00:08:13.377 "nvme_admin": false, 00:08:13.377 "nvme_io": false 00:08:13.377 }, 00:08:13.377 "memory_domains": [ 00:08:13.377 { 00:08:13.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.377 "dma_device_type": 2 00:08:13.377 } 00:08:13.377 ], 00:08:13.377 "driver_specific": {} 00:08:13.377 } 00:08:13.377 ]' 00:08:13.377 17:45:17 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:13.377 17:45:17 -- common/autotest_common.sh@1362 -- # bs=512 00:08:13.377 17:45:17 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:13.377 17:45:17 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:13.377 17:45:17 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:13.377 17:45:17 -- common/autotest_common.sh@1367 -- # echo 512 00:08:13.377 17:45:17 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:13.377 17:45:17 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:14.760 17:45:19 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:14.760 17:45:19 -- common/autotest_common.sh@1177 -- # local i=0 00:08:14.760 17:45:19 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:14.760 17:45:19 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:14.760 17:45:19 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:17.380 17:45:21 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:17.380 17:45:21 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:17.380 17:45:21 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:17.381 17:45:21 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:17.381 17:45:21 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:17.381 17:45:21 -- common/autotest_common.sh@1187 -- # return 0 00:08:17.381 17:45:21 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:17.381 17:45:21 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:17.381 17:45:21 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:17.381 17:45:21 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:17.381 17:45:21 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:17.381 17:45:21 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:17.381 17:45:21 -- setup/common.sh@80 -- # echo 536870912 00:08:17.381 17:45:21 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:17.381 17:45:21 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:17.381 17:45:21 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:17.381 17:45:21 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:17.381 17:45:21 -- target/filesystem.sh@69 -- # partprobe 00:08:17.641 17:45:21 -- target/filesystem.sh@70 -- # sleep 1 00:08:18.583 17:45:22 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:18.583 17:45:22 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:18.583 17:45:22 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:18.583 17:45:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:18.583 17:45:22 -- common/autotest_common.sh@10 -- # set +x 00:08:18.583 ************************************ 00:08:18.583 START TEST filesystem_ext4 00:08:18.583 ************************************ 00:08:18.583 17:45:22 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:18.583 17:45:22 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:18.583 17:45:22 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:18.583 17:45:22 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:18.583 17:45:22 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:18.583 17:45:22 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:18.583 17:45:22 -- common/autotest_common.sh@904 -- # local i=0 00:08:18.583 17:45:22 -- common/autotest_common.sh@905 -- # local force 00:08:18.583 17:45:22 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:18.583 17:45:22 -- common/autotest_common.sh@908 -- # force=-F 00:08:18.583 17:45:22 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:18.583 mke2fs 1.46.5 (30-Dec-2021) 00:08:18.583 Discarding device blocks: 0/522240 done 00:08:18.583 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:18.583 Filesystem UUID: 998411e8-926e-4fea-abd7-fec7603c26de 00:08:18.583 Superblock backups stored on blocks: 00:08:18.583 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:18.583 00:08:18.583 Allocating group tables: 0/64 done 00:08:18.583 Writing inode tables: 0/64 done 00:08:19.121 Creating journal (8192 blocks): done 00:08:19.121 Writing superblocks and filesystem accounting information: 0/64 done 00:08:19.121 00:08:19.121 17:45:23 -- 
common/autotest_common.sh@921 -- # return 0 00:08:19.121 17:45:23 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:19.381 17:45:23 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:19.381 17:45:23 -- target/filesystem.sh@25 -- # sync 00:08:19.381 17:45:23 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:19.381 17:45:23 -- target/filesystem.sh@27 -- # sync 00:08:19.381 17:45:23 -- target/filesystem.sh@29 -- # i=0 00:08:19.381 17:45:23 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:19.381 17:45:23 -- target/filesystem.sh@37 -- # kill -0 1515655 00:08:19.381 17:45:23 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:19.381 17:45:23 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:19.381 17:45:23 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:19.381 17:45:23 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:19.381 00:08:19.381 real 0m0.879s 00:08:19.381 user 0m0.025s 00:08:19.381 sys 0m0.046s 00:08:19.381 17:45:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.381 17:45:23 -- common/autotest_common.sh@10 -- # set +x 00:08:19.381 ************************************ 00:08:19.381 END TEST filesystem_ext4 00:08:19.381 ************************************ 00:08:19.381 17:45:23 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:19.381 17:45:23 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:19.381 17:45:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:19.381 17:45:23 -- common/autotest_common.sh@10 -- # set +x 00:08:19.381 ************************************ 00:08:19.381 START TEST filesystem_btrfs 00:08:19.381 ************************************ 00:08:19.381 17:45:23 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:19.381 17:45:23 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:19.381 17:45:23 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:19.381 17:45:23 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:19.381 17:45:23 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:19.381 17:45:23 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:19.381 17:45:23 -- common/autotest_common.sh@904 -- # local i=0 00:08:19.381 17:45:23 -- common/autotest_common.sh@905 -- # local force 00:08:19.381 17:45:23 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:19.381 17:45:23 -- common/autotest_common.sh@910 -- # force=-f 00:08:19.381 17:45:23 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:19.642 btrfs-progs v6.6.2 00:08:19.642 See https://btrfs.readthedocs.io for more information. 00:08:19.642 00:08:19.642 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:19.642 NOTE: several default settings have changed in version 5.15, please make sure 00:08:19.642 this does not affect your deployments: 00:08:19.642 - DUP for metadata (-m dup) 00:08:19.642 - enabled no-holes (-O no-holes) 00:08:19.642 - enabled free-space-tree (-R free-space-tree) 00:08:19.642 00:08:19.642 Label: (null) 00:08:19.642 UUID: 18e2d634-3a58-44b1-ad78-29fd26123cda 00:08:19.642 Node size: 16384 00:08:19.642 Sector size: 4096 00:08:19.642 Filesystem size: 510.00MiB 00:08:19.642 Block group profiles: 00:08:19.642 Data: single 8.00MiB 00:08:19.642 Metadata: DUP 32.00MiB 00:08:19.642 System: DUP 8.00MiB 00:08:19.642 SSD detected: yes 00:08:19.642 Zoned device: no 00:08:19.642 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:19.642 Runtime features: free-space-tree 00:08:19.642 Checksum: crc32c 00:08:19.642 Number of devices: 1 00:08:19.642 Devices: 00:08:19.642 ID SIZE PATH 00:08:19.642 1 510.00MiB /dev/nvme0n1p1 00:08:19.642 00:08:19.642 17:45:23 -- common/autotest_common.sh@921 -- # return 0 00:08:19.642 17:45:23 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:19.902 17:45:24 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:19.902 17:45:24 -- target/filesystem.sh@25 -- # sync 00:08:19.902 17:45:24 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:19.902 17:45:24 -- target/filesystem.sh@27 -- # sync 00:08:19.902 17:45:24 -- target/filesystem.sh@29 -- # i=0 00:08:19.902 17:45:24 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:19.902 17:45:24 -- target/filesystem.sh@37 -- # kill -0 1515655 00:08:19.902 17:45:24 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:19.902 17:45:24 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:19.902 17:45:24 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:19.902 17:45:24 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:19.902 00:08:19.902 real 0m0.506s 00:08:19.902 user 0m0.025s 00:08:19.902 sys 0m0.062s 00:08:19.902 17:45:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.902 17:45:24 -- common/autotest_common.sh@10 -- # set +x 00:08:19.902 ************************************ 00:08:19.902 END TEST filesystem_btrfs 00:08:19.902 ************************************ 00:08:19.902 17:45:24 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:19.902 17:45:24 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:19.902 17:45:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:19.902 17:45:24 -- common/autotest_common.sh@10 -- # set +x 00:08:19.902 ************************************ 00:08:19.902 START TEST filesystem_xfs 00:08:19.902 ************************************ 00:08:19.902 17:45:24 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:19.902 17:45:24 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:19.902 17:45:24 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:19.902 17:45:24 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:19.902 17:45:24 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:19.902 17:45:24 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:19.902 17:45:24 -- common/autotest_common.sh@904 -- # local i=0 00:08:19.902 17:45:24 -- common/autotest_common.sh@905 -- # local force 00:08:19.902 17:45:24 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:19.902 17:45:24 -- common/autotest_common.sh@910 -- # force=-f 00:08:19.902 17:45:24 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:20.162 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:20.162 = sectsz=512 attr=2, projid32bit=1 00:08:20.162 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:20.162 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:20.162 data = bsize=4096 blocks=130560, imaxpct=25 00:08:20.162 = sunit=0 swidth=0 blks 00:08:20.162 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:20.162 log =internal log bsize=4096 blocks=16384, version=2 00:08:20.162 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:20.162 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:20.734 Discarding blocks...Done. 00:08:20.734 17:45:24 -- common/autotest_common.sh@921 -- # return 0 00:08:20.734 17:45:24 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:22.646 17:45:26 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:22.646 17:45:26 -- target/filesystem.sh@25 -- # sync 00:08:22.646 17:45:26 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:22.646 17:45:26 -- target/filesystem.sh@27 -- # sync 00:08:22.646 17:45:26 -- target/filesystem.sh@29 -- # i=0 00:08:22.646 17:45:26 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:22.646 17:45:26 -- target/filesystem.sh@37 -- # kill -0 1515655 00:08:22.646 17:45:26 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:22.646 17:45:26 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:22.646 17:45:26 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:22.646 17:45:26 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:22.646 00:08:22.646 real 0m2.718s 00:08:22.646 user 0m0.021s 00:08:22.646 sys 0m0.056s 00:08:22.646 17:45:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.646 17:45:26 -- common/autotest_common.sh@10 -- # set +x 00:08:22.646 ************************************ 00:08:22.646 END TEST filesystem_xfs 00:08:22.646 ************************************ 00:08:22.646 17:45:26 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:23.216 17:45:27 -- target/filesystem.sh@93 -- # sync 00:08:23.216 17:45:27 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:23.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.216 17:45:27 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:23.216 17:45:27 -- common/autotest_common.sh@1198 -- # local i=0 00:08:23.217 17:45:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:23.217 17:45:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:23.217 17:45:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:23.217 17:45:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:23.217 17:45:27 -- common/autotest_common.sh@1210 -- # return 0 00:08:23.217 17:45:27 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:23.217 17:45:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.217 17:45:27 -- common/autotest_common.sh@10 -- # set +x 00:08:23.217 17:45:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.217 17:45:27 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:23.217 17:45:27 -- target/filesystem.sh@101 -- # killprocess 1515655 00:08:23.217 17:45:27 -- common/autotest_common.sh@926 -- # '[' -z 1515655 ']' 00:08:23.217 17:45:27 -- common/autotest_common.sh@930 -- # kill -0 1515655 00:08:23.217 17:45:27 -- 
common/autotest_common.sh@931 -- # uname 00:08:23.217 17:45:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:23.217 17:45:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1515655 00:08:23.217 17:45:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:23.217 17:45:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:23.217 17:45:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1515655' 00:08:23.217 killing process with pid 1515655 00:08:23.217 17:45:27 -- common/autotest_common.sh@945 -- # kill 1515655 00:08:23.217 17:45:27 -- common/autotest_common.sh@950 -- # wait 1515655 00:08:23.477 17:45:27 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:23.477 00:08:23.477 real 0m11.177s 00:08:23.477 user 0m43.870s 00:08:23.477 sys 0m1.025s 00:08:23.477 17:45:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.477 17:45:27 -- common/autotest_common.sh@10 -- # set +x 00:08:23.477 ************************************ 00:08:23.477 END TEST nvmf_filesystem_no_in_capsule 00:08:23.477 ************************************ 00:08:23.477 17:45:27 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:23.477 17:45:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:23.477 17:45:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:23.477 17:45:27 -- common/autotest_common.sh@10 -- # set +x 00:08:23.477 ************************************ 00:08:23.477 START TEST nvmf_filesystem_in_capsule 00:08:23.477 ************************************ 00:08:23.477 17:45:27 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:08:23.477 17:45:27 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:23.477 17:45:27 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:23.477 17:45:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:23.477 17:45:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:23.477 17:45:27 -- common/autotest_common.sh@10 -- # set +x 00:08:23.477 17:45:27 -- nvmf/common.sh@469 -- # nvmfpid=1517832 00:08:23.477 17:45:27 -- nvmf/common.sh@470 -- # waitforlisten 1517832 00:08:23.477 17:45:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:23.477 17:45:27 -- common/autotest_common.sh@819 -- # '[' -z 1517832 ']' 00:08:23.477 17:45:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.477 17:45:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:23.477 17:45:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.477 17:45:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:23.477 17:45:27 -- common/autotest_common.sh@10 -- # set +x 00:08:23.477 [2024-07-22 17:45:27.668006] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
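Before the second variant starts, it is worth spelling out the initiator-side loop the log has just walked through for ext4, btrfs and xfs: connect to the subsystem, wait for the block device to show up under its serial, carve a single GPT partition, then hand that partition to each filesystem in turn. A simplified sketch, with the hostnqn/hostid and serial taken from this run and the bounded retry of waitforserial collapsed into a plain loop:

    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
                 --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a \
                 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')   # nvme0n1 in this run
    mkdir -p /mnt/device
    parted -s /dev/$nvme_name mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe && sleep 1
    for fstype in ext4 btrfs xfs; do
        run_test "filesystem_$fstype" nvmf_filesystem_create $fstype $nvme_name
    done

nvmf_filesystem_create is the per-filesystem body (mkfs, mount, a little I/O, unmount) whose output dominates the TEST blocks above and below.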
00:08:23.477 [2024-07-22 17:45:27.668062] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.477 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.737 [2024-07-22 17:45:27.753354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.737 [2024-07-22 17:45:27.814628] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:23.737 [2024-07-22 17:45:27.814759] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.737 [2024-07-22 17:45:27.814769] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.737 [2024-07-22 17:45:27.814781] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:23.737 [2024-07-22 17:45:27.814904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.737 [2024-07-22 17:45:27.815010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.737 [2024-07-22 17:45:27.815128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.737 [2024-07-22 17:45:27.815131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.309 17:45:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:24.309 17:45:28 -- common/autotest_common.sh@852 -- # return 0 00:08:24.309 17:45:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:24.310 17:45:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:24.310 17:45:28 -- common/autotest_common.sh@10 -- # set +x 00:08:24.310 17:45:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.310 17:45:28 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:24.310 17:45:28 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:24.310 17:45:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:24.310 17:45:28 -- common/autotest_common.sh@10 -- # set +x 00:08:24.310 [2024-07-22 17:45:28.552812] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.310 17:45:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:24.310 17:45:28 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:24.310 17:45:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:24.310 17:45:28 -- common/autotest_common.sh@10 -- # set +x 00:08:24.570 Malloc1 00:08:24.570 17:45:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:24.570 17:45:28 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:24.570 17:45:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:24.570 17:45:28 -- common/autotest_common.sh@10 -- # set +x 00:08:24.570 17:45:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:24.570 17:45:28 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:24.570 17:45:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:24.570 17:45:28 -- common/autotest_common.sh@10 -- # set +x 00:08:24.570 17:45:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:24.570 17:45:28 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
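The second pass, nvmf_filesystem_in_capsule, repeats exactly the same provisioning; the one functional difference visible in the trace is the transport's in-capsule data size, i.e. how many bytes of write data the host may embed directly in the NVMe/TCP command capsule instead of having the target fetch them afterwards. Side by side, the two transport calls are:

    # variant 1 (nvmf_filesystem_no_in_capsule): in-capsule data disabled
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    # variant 2 (nvmf_filesystem_in_capsule): up to 4096 bytes carried inside the capsule
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096

Everything else (malloc bdev, subsystem, listener, the three filesystems) is identical, so any divergence between the two TEST blocks points at the in-capsule data path.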
00:08:24.571 17:45:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:24.571 17:45:28 -- common/autotest_common.sh@10 -- # set +x 00:08:24.571 [2024-07-22 17:45:28.674413] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.571 17:45:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:24.571 17:45:28 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:24.571 17:45:28 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:24.571 17:45:28 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:24.571 17:45:28 -- common/autotest_common.sh@1359 -- # local bs 00:08:24.571 17:45:28 -- common/autotest_common.sh@1360 -- # local nb 00:08:24.571 17:45:28 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:24.571 17:45:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:24.571 17:45:28 -- common/autotest_common.sh@10 -- # set +x 00:08:24.571 17:45:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:24.571 17:45:28 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:24.571 { 00:08:24.571 "name": "Malloc1", 00:08:24.571 "aliases": [ 00:08:24.571 "4f04dd81-30c8-4a69-ba27-ede361e2a58a" 00:08:24.571 ], 00:08:24.571 "product_name": "Malloc disk", 00:08:24.571 "block_size": 512, 00:08:24.571 "num_blocks": 1048576, 00:08:24.571 "uuid": "4f04dd81-30c8-4a69-ba27-ede361e2a58a", 00:08:24.571 "assigned_rate_limits": { 00:08:24.571 "rw_ios_per_sec": 0, 00:08:24.571 "rw_mbytes_per_sec": 0, 00:08:24.571 "r_mbytes_per_sec": 0, 00:08:24.571 "w_mbytes_per_sec": 0 00:08:24.571 }, 00:08:24.571 "claimed": true, 00:08:24.571 "claim_type": "exclusive_write", 00:08:24.571 "zoned": false, 00:08:24.571 "supported_io_types": { 00:08:24.571 "read": true, 00:08:24.571 "write": true, 00:08:24.571 "unmap": true, 00:08:24.571 "write_zeroes": true, 00:08:24.571 "flush": true, 00:08:24.571 "reset": true, 00:08:24.571 "compare": false, 00:08:24.571 "compare_and_write": false, 00:08:24.571 "abort": true, 00:08:24.571 "nvme_admin": false, 00:08:24.571 "nvme_io": false 00:08:24.571 }, 00:08:24.571 "memory_domains": [ 00:08:24.571 { 00:08:24.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.571 "dma_device_type": 2 00:08:24.571 } 00:08:24.571 ], 00:08:24.571 "driver_specific": {} 00:08:24.571 } 00:08:24.571 ]' 00:08:24.571 17:45:28 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:24.571 17:45:28 -- common/autotest_common.sh@1362 -- # bs=512 00:08:24.571 17:45:28 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:24.571 17:45:28 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:24.571 17:45:28 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:24.571 17:45:28 -- common/autotest_common.sh@1367 -- # echo 512 00:08:24.571 17:45:28 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:24.571 17:45:28 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:26.482 17:45:30 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:26.482 17:45:30 -- common/autotest_common.sh@1177 -- # local i=0 00:08:26.482 17:45:30 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:26.482 17:45:30 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:26.482 17:45:30 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:28.400 17:45:32 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:28.400 17:45:32 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:28.400 17:45:32 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:28.400 17:45:32 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:28.400 17:45:32 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:28.400 17:45:32 -- common/autotest_common.sh@1187 -- # return 0 00:08:28.400 17:45:32 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:28.400 17:45:32 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:28.400 17:45:32 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:28.400 17:45:32 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:28.400 17:45:32 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:28.400 17:45:32 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:28.400 17:45:32 -- setup/common.sh@80 -- # echo 536870912 00:08:28.400 17:45:32 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:28.400 17:45:32 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:28.400 17:45:32 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:28.400 17:45:32 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:28.400 17:45:32 -- target/filesystem.sh@69 -- # partprobe 00:08:28.662 17:45:32 -- target/filesystem.sh@70 -- # sleep 1 00:08:29.615 17:45:33 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:29.615 17:45:33 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:29.616 17:45:33 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:29.616 17:45:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:29.616 17:45:33 -- common/autotest_common.sh@10 -- # set +x 00:08:29.876 ************************************ 00:08:29.876 START TEST filesystem_in_capsule_ext4 00:08:29.876 ************************************ 00:08:29.876 17:45:33 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:29.876 17:45:33 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:29.876 17:45:33 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:29.876 17:45:33 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:29.876 17:45:33 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:29.876 17:45:33 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:29.876 17:45:33 -- common/autotest_common.sh@904 -- # local i=0 00:08:29.876 17:45:33 -- common/autotest_common.sh@905 -- # local force 00:08:29.876 17:45:33 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:29.876 17:45:33 -- common/autotest_common.sh@908 -- # force=-F 00:08:29.876 17:45:33 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:29.876 mke2fs 1.46.5 (30-Dec-2021) 00:08:29.876 Discarding device blocks: 0/522240 done 00:08:29.876 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:29.876 Filesystem UUID: d90921dc-55ca-4fb1-ad2a-4c3823c1462a 00:08:29.876 Superblock backups stored on blocks: 00:08:29.876 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:29.876 00:08:29.876 Allocating group tables: 0/64 done 00:08:29.876 Writing inode tables: 0/64 done 00:08:33.174 Creating journal (8192 blocks): done 00:08:33.174 Writing superblocks and filesystem accounting information: 0/64 done 00:08:33.174 00:08:33.174 
17:45:36 -- common/autotest_common.sh@921 -- # return 0 00:08:33.174 17:45:36 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:33.174 17:45:36 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:33.174 17:45:37 -- target/filesystem.sh@25 -- # sync 00:08:33.174 17:45:37 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:33.174 17:45:37 -- target/filesystem.sh@27 -- # sync 00:08:33.174 17:45:37 -- target/filesystem.sh@29 -- # i=0 00:08:33.174 17:45:37 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:33.174 17:45:37 -- target/filesystem.sh@37 -- # kill -0 1517832 00:08:33.174 17:45:37 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:33.174 17:45:37 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:33.174 17:45:37 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:33.174 17:45:37 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:33.174 00:08:33.174 real 0m3.158s 00:08:33.174 user 0m0.028s 00:08:33.174 sys 0m0.046s 00:08:33.174 17:45:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.174 17:45:37 -- common/autotest_common.sh@10 -- # set +x 00:08:33.174 ************************************ 00:08:33.174 END TEST filesystem_in_capsule_ext4 00:08:33.174 ************************************ 00:08:33.174 17:45:37 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:33.174 17:45:37 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:33.174 17:45:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:33.174 17:45:37 -- common/autotest_common.sh@10 -- # set +x 00:08:33.174 ************************************ 00:08:33.174 START TEST filesystem_in_capsule_btrfs 00:08:33.174 ************************************ 00:08:33.174 17:45:37 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:33.174 17:45:37 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:33.174 17:45:37 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:33.174 17:45:37 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:33.174 17:45:37 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:33.174 17:45:37 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:33.174 17:45:37 -- common/autotest_common.sh@904 -- # local i=0 00:08:33.174 17:45:37 -- common/autotest_common.sh@905 -- # local force 00:08:33.174 17:45:37 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:33.174 17:45:37 -- common/autotest_common.sh@910 -- # force=-f 00:08:33.174 17:45:37 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:33.174 btrfs-progs v6.6.2 00:08:33.174 See https://btrfs.readthedocs.io for more information. 00:08:33.174 00:08:33.174 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:33.174 NOTE: several default settings have changed in version 5.15, please make sure 00:08:33.174 this does not affect your deployments: 00:08:33.174 - DUP for metadata (-m dup) 00:08:33.174 - enabled no-holes (-O no-holes) 00:08:33.174 - enabled free-space-tree (-R free-space-tree) 00:08:33.174 00:08:33.174 Label: (null) 00:08:33.174 UUID: 77758204-788f-44e0-bb44-f602142deacd 00:08:33.174 Node size: 16384 00:08:33.174 Sector size: 4096 00:08:33.174 Filesystem size: 510.00MiB 00:08:33.174 Block group profiles: 00:08:33.174 Data: single 8.00MiB 00:08:33.174 Metadata: DUP 32.00MiB 00:08:33.174 System: DUP 8.00MiB 00:08:33.174 SSD detected: yes 00:08:33.174 Zoned device: no 00:08:33.174 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:33.174 Runtime features: free-space-tree 00:08:33.174 Checksum: crc32c 00:08:33.174 Number of devices: 1 00:08:33.174 Devices: 00:08:33.174 ID SIZE PATH 00:08:33.174 1 510.00MiB /dev/nvme0n1p1 00:08:33.174 00:08:33.174 17:45:37 -- common/autotest_common.sh@921 -- # return 0 00:08:33.174 17:45:37 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:33.744 17:45:37 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:33.745 17:45:37 -- target/filesystem.sh@25 -- # sync 00:08:33.745 17:45:37 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:33.745 17:45:37 -- target/filesystem.sh@27 -- # sync 00:08:33.745 17:45:37 -- target/filesystem.sh@29 -- # i=0 00:08:33.745 17:45:37 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:33.745 17:45:37 -- target/filesystem.sh@37 -- # kill -0 1517832 00:08:33.745 17:45:37 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:33.745 17:45:37 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:33.745 17:45:37 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:33.745 17:45:37 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:33.745 00:08:33.745 real 0m0.663s 00:08:33.745 user 0m0.024s 00:08:33.745 sys 0m0.060s 00:08:33.745 17:45:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.745 17:45:37 -- common/autotest_common.sh@10 -- # set +x 00:08:33.745 ************************************ 00:08:33.745 END TEST filesystem_in_capsule_btrfs 00:08:33.745 ************************************ 00:08:33.745 17:45:37 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:33.745 17:45:37 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:33.745 17:45:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:33.745 17:45:37 -- common/autotest_common.sh@10 -- # set +x 00:08:33.745 ************************************ 00:08:33.745 START TEST filesystem_in_capsule_xfs 00:08:33.745 ************************************ 00:08:33.745 17:45:37 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:33.745 17:45:37 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:33.745 17:45:37 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:33.745 17:45:37 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:33.745 17:45:37 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:33.745 17:45:37 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:33.745 17:45:37 -- common/autotest_common.sh@904 -- # local i=0 00:08:33.745 17:45:37 -- common/autotest_common.sh@905 -- # local force 00:08:33.745 17:45:37 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:33.745 17:45:37 -- common/autotest_common.sh@910 -- # force=-f 
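The make_filesystem wrapper whose xtrace precedes every mkfs above mainly normalises the "force" flag across filesystems (mkfs.ext4 wants -F, mkfs.btrfs and mkfs.xfs want -f) and, judging by the $i counter it initialises, retries if mkfs fails transiently. A reconstruction from those trace lines, with the retry bookkeeping abbreviated:

    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0
        local force
        # ext4 spells "force" differently from btrfs/xfs
        if [ "$fstype" = ext4 ]; then
            force=-F
        else
            force=-f
        fi
        # the real helper loops a bounded number of times before giving up
        mkfs.$fstype $force "$dev_name"
    }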
00:08:33.745 17:45:37 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:33.745 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:33.745 = sectsz=512 attr=2, projid32bit=1 00:08:33.745 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:33.745 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:33.745 data = bsize=4096 blocks=130560, imaxpct=25 00:08:33.745 = sunit=0 swidth=0 blks 00:08:33.745 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:33.745 log =internal log bsize=4096 blocks=16384, version=2 00:08:33.745 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:33.745 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:34.687 Discarding blocks...Done. 00:08:34.687 17:45:38 -- common/autotest_common.sh@921 -- # return 0 00:08:34.687 17:45:38 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:36.599 17:45:40 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:36.599 17:45:40 -- target/filesystem.sh@25 -- # sync 00:08:36.599 17:45:40 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:36.599 17:45:40 -- target/filesystem.sh@27 -- # sync 00:08:36.599 17:45:40 -- target/filesystem.sh@29 -- # i=0 00:08:36.599 17:45:40 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:36.599 17:45:40 -- target/filesystem.sh@37 -- # kill -0 1517832 00:08:36.599 17:45:40 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:36.599 17:45:40 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:36.599 17:45:40 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:36.599 17:45:40 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:36.599 00:08:36.599 real 0m2.737s 00:08:36.599 user 0m0.026s 00:08:36.599 sys 0m0.050s 00:08:36.599 17:45:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:36.599 17:45:40 -- common/autotest_common.sh@10 -- # set +x 00:08:36.599 ************************************ 00:08:36.599 END TEST filesystem_in_capsule_xfs 00:08:36.599 ************************************ 00:08:36.599 17:45:40 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:36.599 17:45:40 -- target/filesystem.sh@93 -- # sync 00:08:36.599 17:45:40 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:36.599 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.599 17:45:40 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:36.599 17:45:40 -- common/autotest_common.sh@1198 -- # local i=0 00:08:36.599 17:45:40 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:36.599 17:45:40 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:36.599 17:45:40 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:36.600 17:45:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:36.600 17:45:40 -- common/autotest_common.sh@1210 -- # return 0 00:08:36.600 17:45:40 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:36.600 17:45:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:36.600 17:45:40 -- common/autotest_common.sh@10 -- # set +x 00:08:36.600 17:45:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:36.600 17:45:40 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:36.600 17:45:40 -- target/filesystem.sh@101 -- # killprocess 1517832 00:08:36.600 17:45:40 -- common/autotest_common.sh@926 -- # '[' -z 1517832 ']' 00:08:36.600 17:45:40 -- common/autotest_common.sh@930 -- # kill -0 1517832 
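Each per-filesystem TEST block ends the same way: mount the fresh filesystem, do a tiny create/sync/delete cycle, unmount, and assert that the target process and the exported namespace are both still present; once all three filesystems pass, the partition is dropped, the host disconnects (waiting for the serial to disappear), the subsystem is deleted and the target is killed. A condensed sketch of that check-and-teardown sequence, reusing $nvme_name and the nvmfpid from this run (1517832):

    # per-filesystem health check
    mount /dev/${nvme_name}p1 /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 $nvmfpid                                   # nvmf_tgt must still be running
    lsblk -l -o NAME | grep -q -w $nvme_name           # controller still visible
    lsblk -l -o NAME | grep -q -w ${nvme_name}p1       # partition still visible

    # final teardown after ext4/btrfs/xfs all pass
    flock /dev/$nvme_name parted -s /dev/$nvme_name rm 1
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    killprocess $nvmfpid                               # harness helper: kill the pid and wait for it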
00:08:36.600 17:45:40 -- common/autotest_common.sh@931 -- # uname 00:08:36.600 17:45:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:36.600 17:45:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1517832 00:08:36.600 17:45:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:36.600 17:45:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:36.600 17:45:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1517832' 00:08:36.600 killing process with pid 1517832 00:08:36.600 17:45:40 -- common/autotest_common.sh@945 -- # kill 1517832 00:08:36.600 17:45:40 -- common/autotest_common.sh@950 -- # wait 1517832 00:08:36.860 17:45:41 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:36.860 00:08:36.860 real 0m13.473s 00:08:36.860 user 0m53.135s 00:08:36.860 sys 0m0.991s 00:08:36.860 17:45:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:36.860 17:45:41 -- common/autotest_common.sh@10 -- # set +x 00:08:36.860 ************************************ 00:08:36.860 END TEST nvmf_filesystem_in_capsule 00:08:36.860 ************************************ 00:08:36.860 17:45:41 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:36.860 17:45:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:36.860 17:45:41 -- nvmf/common.sh@116 -- # sync 00:08:36.860 17:45:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:36.860 17:45:41 -- nvmf/common.sh@119 -- # set +e 00:08:36.860 17:45:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:36.860 17:45:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:36.860 rmmod nvme_tcp 00:08:37.121 rmmod nvme_fabrics 00:08:37.121 rmmod nvme_keyring 00:08:37.121 17:45:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:37.121 17:45:41 -- nvmf/common.sh@123 -- # set -e 00:08:37.121 17:45:41 -- nvmf/common.sh@124 -- # return 0 00:08:37.121 17:45:41 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:37.121 17:45:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:37.121 17:45:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:37.121 17:45:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:37.121 17:45:41 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:37.121 17:45:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:37.121 17:45:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.121 17:45:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:37.121 17:45:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.031 17:45:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:39.031 00:08:39.031 real 0m35.402s 00:08:39.031 user 1m39.391s 00:08:39.031 sys 0m8.322s 00:08:39.031 17:45:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.031 17:45:43 -- common/autotest_common.sh@10 -- # set +x 00:08:39.031 ************************************ 00:08:39.031 END TEST nvmf_filesystem 00:08:39.031 ************************************ 00:08:39.031 17:45:43 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:39.031 17:45:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:39.031 17:45:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:39.031 17:45:43 -- common/autotest_common.sh@10 -- # set +x 00:08:39.031 ************************************ 00:08:39.031 START TEST nvmf_discovery 00:08:39.031 ************************************ 00:08:39.031 
17:45:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:39.291 * Looking for test storage... 00:08:39.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:39.291 17:45:43 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.291 17:45:43 -- nvmf/common.sh@7 -- # uname -s 00:08:39.292 17:45:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.292 17:45:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.292 17:45:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.292 17:45:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.292 17:45:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.292 17:45:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.292 17:45:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.292 17:45:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.292 17:45:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.292 17:45:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.292 17:45:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:08:39.292 17:45:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:08:39.292 17:45:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.292 17:45:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.292 17:45:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.292 17:45:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:39.292 17:45:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.292 17:45:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.292 17:45:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.292 17:45:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.292 17:45:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.292 17:45:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.292 17:45:43 -- paths/export.sh@5 -- # export PATH 00:08:39.292 17:45:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.292 17:45:43 -- nvmf/common.sh@46 -- # : 0 00:08:39.292 17:45:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:39.292 17:45:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:39.292 17:45:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:39.292 17:45:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.292 17:45:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.292 17:45:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:39.292 17:45:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:39.292 17:45:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:39.292 17:45:43 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:39.292 17:45:43 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:39.292 17:45:43 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:39.292 17:45:43 -- target/discovery.sh@15 -- # hash nvme 00:08:39.292 17:45:43 -- target/discovery.sh@20 -- # nvmftestinit 00:08:39.292 17:45:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:39.292 17:45:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.292 17:45:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:39.292 17:45:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:39.292 17:45:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:39.292 17:45:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.292 17:45:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:39.292 17:45:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.292 17:45:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:39.292 17:45:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:39.292 17:45:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:39.292 17:45:43 -- common/autotest_common.sh@10 -- # set +x 00:08:47.433 17:45:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:47.433 17:45:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:47.433 17:45:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:47.433 17:45:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:47.433 17:45:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:47.433 17:45:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:47.433 17:45:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:47.433 17:45:51 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:47.433 17:45:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:47.433 17:45:51 -- nvmf/common.sh@295 -- # e810=() 00:08:47.433 17:45:51 -- nvmf/common.sh@295 -- # local -ga e810 00:08:47.433 17:45:51 -- nvmf/common.sh@296 -- # x722=() 00:08:47.433 17:45:51 -- nvmf/common.sh@296 -- # local -ga x722 00:08:47.433 17:45:51 -- nvmf/common.sh@297 -- # mlx=() 00:08:47.433 17:45:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:47.433 17:45:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:47.433 17:45:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:47.433 17:45:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:47.433 17:45:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:47.433 17:45:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:47.433 17:45:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:47.433 17:45:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:47.433 17:45:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:47.433 17:45:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:47.433 17:45:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:47.433 17:45:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:47.433 17:45:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:47.433 17:45:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:47.433 17:45:51 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:47.433 17:45:51 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:47.433 17:45:51 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:47.433 17:45:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:47.433 17:45:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:47.433 17:45:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:47.433 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:47.433 17:45:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:47.433 17:45:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:47.433 17:45:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.433 17:45:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.433 17:45:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:47.433 17:45:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:47.433 17:45:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:47.433 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:47.433 17:45:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:47.433 17:45:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:47.433 17:45:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.433 17:45:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.433 17:45:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:47.433 17:45:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:47.433 17:45:51 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:47.433 17:45:51 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:47.433 17:45:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:47.433 17:45:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.433 17:45:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:47.433 17:45:51 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.433 17:45:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:47.433 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:47.433 17:45:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.433 17:45:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:47.433 17:45:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.433 17:45:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:47.433 17:45:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.433 17:45:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:47.433 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:47.433 17:45:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.433 17:45:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:47.433 17:45:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:47.433 17:45:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:47.433 17:45:51 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:47.433 17:45:51 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:47.433 17:45:51 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:47.433 17:45:51 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:47.433 17:45:51 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:47.433 17:45:51 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:47.433 17:45:51 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:47.433 17:45:51 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:47.433 17:45:51 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:47.433 17:45:51 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:47.433 17:45:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:47.433 17:45:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:47.433 17:45:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:47.433 17:45:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:47.433 17:45:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:47.433 17:45:51 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:47.433 17:45:51 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:47.433 17:45:51 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:47.433 17:45:51 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:47.433 17:45:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:47.433 17:45:51 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:47.433 17:45:51 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:47.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:47.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:08:47.433 00:08:47.433 --- 10.0.0.2 ping statistics --- 00:08:47.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.433 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:08:47.433 17:45:51 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:47.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:47.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:08:47.433 00:08:47.433 --- 10.0.0.1 ping statistics --- 00:08:47.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.433 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:08:47.433 17:45:51 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:47.433 17:45:51 -- nvmf/common.sh@410 -- # return 0 00:08:47.433 17:45:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:47.433 17:45:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:47.433 17:45:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:47.433 17:45:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:47.433 17:45:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:47.433 17:45:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:47.433 17:45:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:47.433 17:45:51 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:47.433 17:45:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:47.433 17:45:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:47.433 17:45:51 -- common/autotest_common.sh@10 -- # set +x 00:08:47.433 17:45:51 -- nvmf/common.sh@469 -- # nvmfpid=1524534 00:08:47.433 17:45:51 -- nvmf/common.sh@470 -- # waitforlisten 1524534 00:08:47.433 17:45:51 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:47.433 17:45:51 -- common/autotest_common.sh@819 -- # '[' -z 1524534 ']' 00:08:47.433 17:45:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.433 17:45:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:47.433 17:45:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.433 17:45:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:47.433 17:45:51 -- common/autotest_common.sh@10 -- # set +x 00:08:47.433 [2024-07-22 17:45:51.475839] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:47.433 [2024-07-22 17:45:51.475901] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.434 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.434 [2024-07-22 17:45:51.567504] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:47.434 [2024-07-22 17:45:51.658157] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:47.434 [2024-07-22 17:45:51.658311] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:47.434 [2024-07-22 17:45:51.658321] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:47.434 [2024-07-22 17:45:51.658329] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:47.434 [2024-07-22 17:45:51.658417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.434 [2024-07-22 17:45:51.658596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:47.434 [2024-07-22 17:45:51.658725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:47.434 [2024-07-22 17:45:51.658729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.374 17:45:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:48.374 17:45:52 -- common/autotest_common.sh@852 -- # return 0 00:08:48.374 17:45:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:48.374 17:45:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:48.374 17:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.374 17:45:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:48.374 17:45:52 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:48.374 17:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.374 17:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.374 [2024-07-22 17:45:52.373517] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:48.374 17:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.374 17:45:52 -- target/discovery.sh@26 -- # seq 1 4 00:08:48.374 17:45:52 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:48.374 17:45:52 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:48.374 17:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.374 17:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.374 Null1 00:08:48.374 17:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.374 17:45:52 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:48.374 17:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.374 17:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.374 17:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.374 17:45:52 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:48.374 17:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.374 17:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.374 17:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.374 17:45:52 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:48.374 17:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.374 17:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.374 [2024-07-22 17:45:52.427157] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:48.374 17:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.374 17:45:52 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:48.374 17:45:52 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:48.374 17:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.374 17:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.374 Null2 00:08:48.374 17:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.374 17:45:52 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:48.374 17:45:52 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.374 17:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.374 17:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.374 17:45:52 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:48.374 17:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.374 17:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.374 17:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.374 17:45:52 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:48.374 17:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.374 17:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.374 17:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.374 17:45:52 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:48.374 17:45:52 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:48.374 17:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.374 17:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.374 Null3 00:08:48.374 17:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.374 17:45:52 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:48.374 17:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.374 17:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.374 17:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.374 17:45:52 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:48.374 17:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.374 17:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.374 17:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.374 17:45:52 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:48.374 17:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.374 17:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.374 17:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.374 17:45:52 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:48.374 17:45:52 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:48.374 17:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.374 17:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.374 Null4 00:08:48.374 17:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.374 17:45:52 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:48.374 17:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.374 17:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.374 17:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.374 17:45:52 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:48.374 17:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.374 17:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.374 17:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.374 17:45:52 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:48.374 
17:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.374 17:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.374 17:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.374 17:45:52 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:48.374 17:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.374 17:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.374 17:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.374 17:45:52 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:48.374 17:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.374 17:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.374 17:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.374 17:45:52 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 4420 00:08:48.635 00:08:48.635 Discovery Log Number of Records 6, Generation counter 6 00:08:48.635 =====Discovery Log Entry 0====== 00:08:48.635 trtype: tcp 00:08:48.635 adrfam: ipv4 00:08:48.635 subtype: current discovery subsystem 00:08:48.635 treq: not required 00:08:48.635 portid: 0 00:08:48.635 trsvcid: 4420 00:08:48.635 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:48.635 traddr: 10.0.0.2 00:08:48.635 eflags: explicit discovery connections, duplicate discovery information 00:08:48.635 sectype: none 00:08:48.635 =====Discovery Log Entry 1====== 00:08:48.635 trtype: tcp 00:08:48.635 adrfam: ipv4 00:08:48.635 subtype: nvme subsystem 00:08:48.635 treq: not required 00:08:48.635 portid: 0 00:08:48.635 trsvcid: 4420 00:08:48.635 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:48.635 traddr: 10.0.0.2 00:08:48.635 eflags: none 00:08:48.635 sectype: none 00:08:48.635 =====Discovery Log Entry 2====== 00:08:48.635 trtype: tcp 00:08:48.635 adrfam: ipv4 00:08:48.635 subtype: nvme subsystem 00:08:48.635 treq: not required 00:08:48.635 portid: 0 00:08:48.635 trsvcid: 4420 00:08:48.635 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:48.635 traddr: 10.0.0.2 00:08:48.635 eflags: none 00:08:48.635 sectype: none 00:08:48.635 =====Discovery Log Entry 3====== 00:08:48.635 trtype: tcp 00:08:48.635 adrfam: ipv4 00:08:48.635 subtype: nvme subsystem 00:08:48.635 treq: not required 00:08:48.635 portid: 0 00:08:48.635 trsvcid: 4420 00:08:48.635 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:48.635 traddr: 10.0.0.2 00:08:48.635 eflags: none 00:08:48.635 sectype: none 00:08:48.635 =====Discovery Log Entry 4====== 00:08:48.635 trtype: tcp 00:08:48.635 adrfam: ipv4 00:08:48.635 subtype: nvme subsystem 00:08:48.635 treq: not required 00:08:48.635 portid: 0 00:08:48.635 trsvcid: 4420 00:08:48.635 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:48.635 traddr: 10.0.0.2 00:08:48.635 eflags: none 00:08:48.635 sectype: none 00:08:48.635 =====Discovery Log Entry 5====== 00:08:48.635 trtype: tcp 00:08:48.635 adrfam: ipv4 00:08:48.635 subtype: discovery subsystem referral 00:08:48.635 treq: not required 00:08:48.635 portid: 0 00:08:48.635 trsvcid: 4430 00:08:48.635 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:48.635 traddr: 10.0.0.2 00:08:48.635 eflags: none 00:08:48.635 sectype: none 00:08:48.635 17:45:52 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:48.635 Perform nvmf subsystem discovery via RPC 00:08:48.635 17:45:52 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:48.635 17:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.635 17:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.635 [2024-07-22 17:45:52.699907] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:48.635 [ 00:08:48.635 { 00:08:48.635 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:48.635 "subtype": "Discovery", 00:08:48.635 "listen_addresses": [ 00:08:48.635 { 00:08:48.635 "transport": "TCP", 00:08:48.635 "trtype": "TCP", 00:08:48.635 "adrfam": "IPv4", 00:08:48.635 "traddr": "10.0.0.2", 00:08:48.635 "trsvcid": "4420" 00:08:48.635 } 00:08:48.635 ], 00:08:48.635 "allow_any_host": true, 00:08:48.635 "hosts": [] 00:08:48.635 }, 00:08:48.635 { 00:08:48.635 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:48.635 "subtype": "NVMe", 00:08:48.635 "listen_addresses": [ 00:08:48.635 { 00:08:48.635 "transport": "TCP", 00:08:48.635 "trtype": "TCP", 00:08:48.635 "adrfam": "IPv4", 00:08:48.635 "traddr": "10.0.0.2", 00:08:48.635 "trsvcid": "4420" 00:08:48.635 } 00:08:48.635 ], 00:08:48.635 "allow_any_host": true, 00:08:48.635 "hosts": [], 00:08:48.635 "serial_number": "SPDK00000000000001", 00:08:48.635 "model_number": "SPDK bdev Controller", 00:08:48.635 "max_namespaces": 32, 00:08:48.635 "min_cntlid": 1, 00:08:48.635 "max_cntlid": 65519, 00:08:48.635 "namespaces": [ 00:08:48.635 { 00:08:48.635 "nsid": 1, 00:08:48.635 "bdev_name": "Null1", 00:08:48.635 "name": "Null1", 00:08:48.635 "nguid": "461BDC2C38294CD79BFAC40D8FA6D768", 00:08:48.635 "uuid": "461bdc2c-3829-4cd7-9bfa-c40d8fa6d768" 00:08:48.635 } 00:08:48.635 ] 00:08:48.635 }, 00:08:48.635 { 00:08:48.635 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:48.635 "subtype": "NVMe", 00:08:48.635 "listen_addresses": [ 00:08:48.635 { 00:08:48.635 "transport": "TCP", 00:08:48.635 "trtype": "TCP", 00:08:48.635 "adrfam": "IPv4", 00:08:48.635 "traddr": "10.0.0.2", 00:08:48.635 "trsvcid": "4420" 00:08:48.635 } 00:08:48.635 ], 00:08:48.635 "allow_any_host": true, 00:08:48.635 "hosts": [], 00:08:48.635 "serial_number": "SPDK00000000000002", 00:08:48.635 "model_number": "SPDK bdev Controller", 00:08:48.635 "max_namespaces": 32, 00:08:48.635 "min_cntlid": 1, 00:08:48.635 "max_cntlid": 65519, 00:08:48.635 "namespaces": [ 00:08:48.635 { 00:08:48.635 "nsid": 1, 00:08:48.635 "bdev_name": "Null2", 00:08:48.635 "name": "Null2", 00:08:48.635 "nguid": "5311D8FCB5E14A909858690B493D5F17", 00:08:48.635 "uuid": "5311d8fc-b5e1-4a90-9858-690b493d5f17" 00:08:48.635 } 00:08:48.635 ] 00:08:48.635 }, 00:08:48.635 { 00:08:48.635 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:48.635 "subtype": "NVMe", 00:08:48.635 "listen_addresses": [ 00:08:48.635 { 00:08:48.636 "transport": "TCP", 00:08:48.636 "trtype": "TCP", 00:08:48.636 "adrfam": "IPv4", 00:08:48.636 "traddr": "10.0.0.2", 00:08:48.636 "trsvcid": "4420" 00:08:48.636 } 00:08:48.636 ], 00:08:48.636 "allow_any_host": true, 00:08:48.636 "hosts": [], 00:08:48.636 "serial_number": "SPDK00000000000003", 00:08:48.636 "model_number": "SPDK bdev Controller", 00:08:48.636 "max_namespaces": 32, 00:08:48.636 "min_cntlid": 1, 00:08:48.636 "max_cntlid": 65519, 00:08:48.636 "namespaces": [ 00:08:48.636 { 00:08:48.636 "nsid": 1, 00:08:48.636 "bdev_name": "Null3", 00:08:48.636 "name": "Null3", 00:08:48.636 "nguid": "408FBD9939AC4A5AB6EBF24BE74E9E5D", 00:08:48.636 "uuid": "408fbd99-39ac-4a5a-b6eb-f24be74e9e5d" 00:08:48.636 } 00:08:48.636 ] 
00:08:48.636 }, 00:08:48.636 { 00:08:48.636 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:48.636 "subtype": "NVMe", 00:08:48.636 "listen_addresses": [ 00:08:48.636 { 00:08:48.636 "transport": "TCP", 00:08:48.636 "trtype": "TCP", 00:08:48.636 "adrfam": "IPv4", 00:08:48.636 "traddr": "10.0.0.2", 00:08:48.636 "trsvcid": "4420" 00:08:48.636 } 00:08:48.636 ], 00:08:48.636 "allow_any_host": true, 00:08:48.636 "hosts": [], 00:08:48.636 "serial_number": "SPDK00000000000004", 00:08:48.636 "model_number": "SPDK bdev Controller", 00:08:48.636 "max_namespaces": 32, 00:08:48.636 "min_cntlid": 1, 00:08:48.636 "max_cntlid": 65519, 00:08:48.636 "namespaces": [ 00:08:48.636 { 00:08:48.636 "nsid": 1, 00:08:48.636 "bdev_name": "Null4", 00:08:48.636 "name": "Null4", 00:08:48.636 "nguid": "C9F77910038D441CAD0BB1DFFDEA0D8F", 00:08:48.636 "uuid": "c9f77910-038d-441c-ad0b-b1dffdea0d8f" 00:08:48.636 } 00:08:48.636 ] 00:08:48.636 } 00:08:48.636 ] 00:08:48.636 17:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.636 17:45:52 -- target/discovery.sh@42 -- # seq 1 4 00:08:48.636 17:45:52 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:48.636 17:45:52 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:48.636 17:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.636 17:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.636 17:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.636 17:45:52 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:48.636 17:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.636 17:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.636 17:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.636 17:45:52 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:48.636 17:45:52 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:48.636 17:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.636 17:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.636 17:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.636 17:45:52 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:48.636 17:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.636 17:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.636 17:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.636 17:45:52 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:48.636 17:45:52 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:48.636 17:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.636 17:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.636 17:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.636 17:45:52 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:48.636 17:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.636 17:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.636 17:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.636 17:45:52 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:48.636 17:45:52 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:48.636 17:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.636 17:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.636 17:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:08:48.636 17:45:52 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:48.636 17:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.636 17:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.636 17:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.636 17:45:52 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:48.636 17:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.636 17:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.636 17:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.636 17:45:52 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:48.636 17:45:52 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:48.636 17:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.636 17:45:52 -- common/autotest_common.sh@10 -- # set +x 00:08:48.636 17:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.636 17:45:52 -- target/discovery.sh@49 -- # check_bdevs= 00:08:48.636 17:45:52 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:48.636 17:45:52 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:48.636 17:45:52 -- target/discovery.sh@57 -- # nvmftestfini 00:08:48.636 17:45:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:48.636 17:45:52 -- nvmf/common.sh@116 -- # sync 00:08:48.636 17:45:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:48.636 17:45:52 -- nvmf/common.sh@119 -- # set +e 00:08:48.636 17:45:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:48.636 17:45:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:48.636 rmmod nvme_tcp 00:08:48.636 rmmod nvme_fabrics 00:08:48.636 rmmod nvme_keyring 00:08:48.899 17:45:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:48.899 17:45:52 -- nvmf/common.sh@123 -- # set -e 00:08:48.899 17:45:52 -- nvmf/common.sh@124 -- # return 0 00:08:48.899 17:45:52 -- nvmf/common.sh@477 -- # '[' -n 1524534 ']' 00:08:48.899 17:45:52 -- nvmf/common.sh@478 -- # killprocess 1524534 00:08:48.899 17:45:52 -- common/autotest_common.sh@926 -- # '[' -z 1524534 ']' 00:08:48.899 17:45:52 -- common/autotest_common.sh@930 -- # kill -0 1524534 00:08:48.899 17:45:52 -- common/autotest_common.sh@931 -- # uname 00:08:48.899 17:45:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:48.899 17:45:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1524534 00:08:48.899 17:45:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:48.899 17:45:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:48.899 17:45:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1524534' 00:08:48.899 killing process with pid 1524534 00:08:48.899 17:45:52 -- common/autotest_common.sh@945 -- # kill 1524534 00:08:48.899 [2024-07-22 17:45:52.987804] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:48.899 17:45:52 -- common/autotest_common.sh@950 -- # wait 1524534 00:08:48.899 17:45:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:48.899 17:45:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:48.899 17:45:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:48.899 17:45:53 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:48.899 17:45:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:48.899 17:45:53 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.899 17:45:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:48.899 17:45:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.471 17:45:55 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:51.471 00:08:51.471 real 0m11.889s 00:08:51.471 user 0m8.345s 00:08:51.471 sys 0m6.261s 00:08:51.471 17:45:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:51.471 17:45:55 -- common/autotest_common.sh@10 -- # set +x 00:08:51.471 ************************************ 00:08:51.471 END TEST nvmf_discovery 00:08:51.471 ************************************ 00:08:51.471 17:45:55 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:51.471 17:45:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:51.471 17:45:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:51.471 17:45:55 -- common/autotest_common.sh@10 -- # set +x 00:08:51.471 ************************************ 00:08:51.471 START TEST nvmf_referrals 00:08:51.471 ************************************ 00:08:51.471 17:45:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:51.471 * Looking for test storage... 00:08:51.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:51.471 17:45:55 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:51.471 17:45:55 -- nvmf/common.sh@7 -- # uname -s 00:08:51.472 17:45:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.472 17:45:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.472 17:45:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.472 17:45:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.472 17:45:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.472 17:45:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.472 17:45:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.472 17:45:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.472 17:45:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.472 17:45:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.472 17:45:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:08:51.472 17:45:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:08:51.472 17:45:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.472 17:45:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.472 17:45:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:51.472 17:45:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:51.472 17:45:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.472 17:45:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.472 17:45:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.472 17:45:55 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.472 17:45:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.472 17:45:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.472 17:45:55 -- paths/export.sh@5 -- # export PATH 00:08:51.472 17:45:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.472 17:45:55 -- nvmf/common.sh@46 -- # : 0 00:08:51.472 17:45:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:51.472 17:45:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:51.472 17:45:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:51.472 17:45:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.472 17:45:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.472 17:45:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:51.472 17:45:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:51.472 17:45:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:51.472 17:45:55 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:51.472 17:45:55 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:51.472 17:45:55 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:51.472 17:45:55 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:51.472 17:45:55 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:51.472 17:45:55 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:51.472 17:45:55 -- target/referrals.sh@37 -- # nvmftestinit 00:08:51.472 17:45:55 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:08:51.472 17:45:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:51.472 17:45:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:51.472 17:45:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:51.472 17:45:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:51.472 17:45:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.472 17:45:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:51.472 17:45:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.472 17:45:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:51.472 17:45:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:51.472 17:45:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:51.472 17:45:55 -- common/autotest_common.sh@10 -- # set +x 00:08:59.664 17:46:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:59.664 17:46:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:59.664 17:46:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:59.664 17:46:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:59.664 17:46:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:59.664 17:46:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:59.664 17:46:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:59.664 17:46:02 -- nvmf/common.sh@294 -- # net_devs=() 00:08:59.664 17:46:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:59.664 17:46:02 -- nvmf/common.sh@295 -- # e810=() 00:08:59.664 17:46:02 -- nvmf/common.sh@295 -- # local -ga e810 00:08:59.664 17:46:02 -- nvmf/common.sh@296 -- # x722=() 00:08:59.664 17:46:02 -- nvmf/common.sh@296 -- # local -ga x722 00:08:59.664 17:46:02 -- nvmf/common.sh@297 -- # mlx=() 00:08:59.664 17:46:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:59.664 17:46:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:59.664 17:46:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:59.664 17:46:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:59.664 17:46:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:59.664 17:46:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:59.664 17:46:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:59.664 17:46:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:59.664 17:46:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:59.664 17:46:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:59.664 17:46:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:59.664 17:46:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:59.664 17:46:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:59.664 17:46:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:59.664 17:46:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:59.664 17:46:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:59.664 17:46:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:59.664 17:46:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:59.664 17:46:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:59.664 17:46:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:59.664 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:59.664 17:46:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:59.664 17:46:02 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:59.664 17:46:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.664 17:46:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.664 17:46:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:59.664 17:46:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:59.665 17:46:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:59.665 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:59.665 17:46:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:59.665 17:46:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:59.665 17:46:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.665 17:46:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.665 17:46:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:59.665 17:46:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:59.665 17:46:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:59.665 17:46:02 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:59.665 17:46:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:59.665 17:46:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.665 17:46:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:59.665 17:46:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.665 17:46:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:59.665 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:59.665 17:46:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.665 17:46:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:59.665 17:46:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.665 17:46:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:59.665 17:46:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.665 17:46:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:59.665 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:59.665 17:46:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.665 17:46:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:59.665 17:46:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:59.665 17:46:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:59.665 17:46:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:59.665 17:46:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:59.665 17:46:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:59.665 17:46:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:59.665 17:46:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:59.665 17:46:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:59.665 17:46:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:59.665 17:46:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:59.665 17:46:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:59.665 17:46:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:59.665 17:46:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:59.665 17:46:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:59.665 17:46:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:59.665 17:46:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:59.665 17:46:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:08:59.665 17:46:03 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:59.665 17:46:03 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:59.665 17:46:03 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:59.665 17:46:03 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:59.665 17:46:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:59.665 17:46:03 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:59.665 17:46:03 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:59.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:59.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.529 ms 00:08:59.665 00:08:59.665 --- 10.0.0.2 ping statistics --- 00:08:59.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.665 rtt min/avg/max/mdev = 0.529/0.529/0.529/0.000 ms 00:08:59.665 17:46:03 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:59.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:59.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:08:59.665 00:08:59.665 --- 10.0.0.1 ping statistics --- 00:08:59.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.665 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:08:59.665 17:46:03 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:59.665 17:46:03 -- nvmf/common.sh@410 -- # return 0 00:08:59.665 17:46:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:59.665 17:46:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:59.665 17:46:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:59.665 17:46:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:59.665 17:46:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:59.665 17:46:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:59.665 17:46:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:59.665 17:46:03 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:59.665 17:46:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:59.665 17:46:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:59.665 17:46:03 -- common/autotest_common.sh@10 -- # set +x 00:08:59.665 17:46:03 -- nvmf/common.sh@469 -- # nvmfpid=1529298 00:08:59.665 17:46:03 -- nvmf/common.sh@470 -- # waitforlisten 1529298 00:08:59.665 17:46:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:59.665 17:46:03 -- common/autotest_common.sh@819 -- # '[' -z 1529298 ']' 00:08:59.665 17:46:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.665 17:46:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:59.665 17:46:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.665 17:46:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:59.665 17:46:03 -- common/autotest_common.sh@10 -- # set +x 00:08:59.665 [2024-07-22 17:46:03.261425] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:08:59.665 [2024-07-22 17:46:03.261493] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.665 EAL: No free 2048 kB hugepages reported on node 1 00:08:59.665 [2024-07-22 17:46:03.354916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:59.665 [2024-07-22 17:46:03.446119] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:59.665 [2024-07-22 17:46:03.446288] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.665 [2024-07-22 17:46:03.446298] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.665 [2024-07-22 17:46:03.446305] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:59.665 [2024-07-22 17:46:03.446459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.665 [2024-07-22 17:46:03.446498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.665 [2024-07-22 17:46:03.446611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:59.665 [2024-07-22 17:46:03.446613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.926 17:46:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:59.926 17:46:04 -- common/autotest_common.sh@852 -- # return 0 00:08:59.926 17:46:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:59.926 17:46:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:59.926 17:46:04 -- common/autotest_common.sh@10 -- # set +x 00:08:59.926 17:46:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.926 17:46:04 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:59.926 17:46:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.926 17:46:04 -- common/autotest_common.sh@10 -- # set +x 00:08:59.926 [2024-07-22 17:46:04.158570] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.926 17:46:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.926 17:46:04 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:59.926 17:46:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.926 17:46:04 -- common/autotest_common.sh@10 -- # set +x 00:08:59.926 [2024-07-22 17:46:04.172105] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:59.926 17:46:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.926 17:46:04 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:59.926 17:46:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.926 17:46:04 -- common/autotest_common.sh@10 -- # set +x 00:08:59.926 17:46:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.926 17:46:04 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:59.926 17:46:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.926 17:46:04 -- common/autotest_common.sh@10 -- # set +x 00:08:59.926 17:46:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.926 17:46:04 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:08:59.926 17:46:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.926 17:46:04 -- common/autotest_common.sh@10 -- # set +x 00:09:00.186 17:46:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:00.186 17:46:04 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:00.186 17:46:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:00.186 17:46:04 -- target/referrals.sh@48 -- # jq length 00:09:00.186 17:46:04 -- common/autotest_common.sh@10 -- # set +x 00:09:00.186 17:46:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:00.186 17:46:04 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:00.186 17:46:04 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:00.186 17:46:04 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:00.186 17:46:04 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:00.186 17:46:04 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:00.186 17:46:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:00.186 17:46:04 -- common/autotest_common.sh@10 -- # set +x 00:09:00.186 17:46:04 -- target/referrals.sh@21 -- # sort 00:09:00.186 17:46:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:00.186 17:46:04 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:00.186 17:46:04 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:00.186 17:46:04 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:00.186 17:46:04 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:00.186 17:46:04 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:00.186 17:46:04 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:00.186 17:46:04 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:00.186 17:46:04 -- target/referrals.sh@26 -- # sort 00:09:00.447 17:46:04 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:00.447 17:46:04 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:00.447 17:46:04 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:00.447 17:46:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:00.447 17:46:04 -- common/autotest_common.sh@10 -- # set +x 00:09:00.447 17:46:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:00.447 17:46:04 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:00.447 17:46:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:00.447 17:46:04 -- common/autotest_common.sh@10 -- # set +x 00:09:00.447 17:46:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:00.447 17:46:04 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:00.447 17:46:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:00.447 17:46:04 -- common/autotest_common.sh@10 -- # set +x 00:09:00.447 17:46:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:00.447 17:46:04 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:00.447 17:46:04 -- target/referrals.sh@56 -- # jq length 00:09:00.447 17:46:04 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:09:00.447 17:46:04 -- common/autotest_common.sh@10 -- # set +x 00:09:00.447 17:46:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:00.447 17:46:04 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:00.447 17:46:04 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:00.447 17:46:04 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:00.447 17:46:04 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:00.447 17:46:04 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:00.447 17:46:04 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:00.447 17:46:04 -- target/referrals.sh@26 -- # sort 00:09:00.447 17:46:04 -- target/referrals.sh@26 -- # echo 00:09:00.447 17:46:04 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:00.447 17:46:04 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:09:00.447 17:46:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:00.447 17:46:04 -- common/autotest_common.sh@10 -- # set +x 00:09:00.447 17:46:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:00.447 17:46:04 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:00.447 17:46:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:00.447 17:46:04 -- common/autotest_common.sh@10 -- # set +x 00:09:00.707 17:46:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:00.707 17:46:04 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:00.707 17:46:04 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:00.707 17:46:04 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:00.707 17:46:04 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:00.707 17:46:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:00.707 17:46:04 -- target/referrals.sh@21 -- # sort 00:09:00.707 17:46:04 -- common/autotest_common.sh@10 -- # set +x 00:09:00.707 17:46:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:00.707 17:46:04 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:00.707 17:46:04 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:00.707 17:46:04 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:00.707 17:46:04 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:00.707 17:46:04 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:00.707 17:46:04 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:00.707 17:46:04 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:00.707 17:46:04 -- target/referrals.sh@26 -- # sort 00:09:00.707 17:46:04 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:00.708 17:46:04 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:00.708 17:46:04 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:00.708 17:46:04 -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:00.708 17:46:04 -- 
target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:00.708 17:46:04 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:00.708 17:46:04 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:00.968 17:46:05 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:00.968 17:46:05 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:00.968 17:46:05 -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:00.968 17:46:05 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:00.968 17:46:05 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:00.968 17:46:05 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:00.968 17:46:05 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:00.968 17:46:05 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:00.968 17:46:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:00.968 17:46:05 -- common/autotest_common.sh@10 -- # set +x 00:09:00.968 17:46:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:00.968 17:46:05 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:00.968 17:46:05 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:00.968 17:46:05 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:00.968 17:46:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:00.968 17:46:05 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:00.968 17:46:05 -- common/autotest_common.sh@10 -- # set +x 00:09:00.968 17:46:05 -- target/referrals.sh@21 -- # sort 00:09:00.968 17:46:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:00.968 17:46:05 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:00.968 17:46:05 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:00.969 17:46:05 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:00.969 17:46:05 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:00.969 17:46:05 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:00.969 17:46:05 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:00.969 17:46:05 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:00.969 17:46:05 -- target/referrals.sh@26 -- # sort 00:09:01.229 17:46:05 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:01.229 17:46:05 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:01.229 17:46:05 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:01.229 17:46:05 -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:01.229 17:46:05 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:01.229 17:46:05 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:01.229 17:46:05 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:01.490 17:46:05 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:01.490 17:46:05 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:01.490 17:46:05 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:01.490 17:46:05 -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:01.490 17:46:05 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:01.490 17:46:05 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:01.490 17:46:05 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:01.490 17:46:05 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:01.490 17:46:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:01.490 17:46:05 -- common/autotest_common.sh@10 -- # set +x 00:09:01.490 17:46:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:01.490 17:46:05 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:01.490 17:46:05 -- target/referrals.sh@82 -- # jq length 00:09:01.490 17:46:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:01.490 17:46:05 -- common/autotest_common.sh@10 -- # set +x 00:09:01.490 17:46:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:01.490 17:46:05 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:01.490 17:46:05 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:01.490 17:46:05 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:01.490 17:46:05 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:01.490 17:46:05 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:01.490 17:46:05 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:01.490 17:46:05 -- target/referrals.sh@26 -- # sort 00:09:01.751 17:46:05 -- target/referrals.sh@26 -- # echo 00:09:01.751 17:46:05 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:01.751 17:46:05 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:01.751 17:46:05 -- target/referrals.sh@86 -- # nvmftestfini 00:09:01.751 17:46:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:01.751 17:46:05 -- nvmf/common.sh@116 -- # sync 00:09:01.751 17:46:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:01.751 17:46:05 -- nvmf/common.sh@119 -- # set +e 00:09:01.751 17:46:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:01.751 17:46:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:01.751 rmmod nvme_tcp 00:09:01.751 rmmod nvme_fabrics 00:09:01.751 rmmod nvme_keyring 00:09:01.751 17:46:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:01.751 17:46:05 -- nvmf/common.sh@123 -- # set -e 00:09:01.751 17:46:05 -- nvmf/common.sh@124 -- # return 0 00:09:01.751 17:46:05 -- nvmf/common.sh@477 
-- # '[' -n 1529298 ']' 00:09:01.751 17:46:05 -- nvmf/common.sh@478 -- # killprocess 1529298 00:09:01.751 17:46:05 -- common/autotest_common.sh@926 -- # '[' -z 1529298 ']' 00:09:01.751 17:46:05 -- common/autotest_common.sh@930 -- # kill -0 1529298 00:09:01.751 17:46:05 -- common/autotest_common.sh@931 -- # uname 00:09:01.751 17:46:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:01.751 17:46:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1529298 00:09:01.751 17:46:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:01.751 17:46:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:01.751 17:46:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1529298' 00:09:01.751 killing process with pid 1529298 00:09:01.751 17:46:05 -- common/autotest_common.sh@945 -- # kill 1529298 00:09:01.751 17:46:05 -- common/autotest_common.sh@950 -- # wait 1529298 00:09:02.012 17:46:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:02.012 17:46:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:02.012 17:46:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:02.012 17:46:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:02.012 17:46:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:02.012 17:46:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.012 17:46:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:02.012 17:46:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.924 17:46:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:09:03.924 00:09:03.924 real 0m12.876s 00:09:03.925 user 0m13.445s 00:09:03.925 sys 0m6.445s 00:09:03.925 17:46:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.925 17:46:08 -- common/autotest_common.sh@10 -- # set +x 00:09:03.925 ************************************ 00:09:03.925 END TEST nvmf_referrals 00:09:03.925 ************************************ 00:09:03.925 17:46:08 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:03.925 17:46:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:03.925 17:46:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:03.925 17:46:08 -- common/autotest_common.sh@10 -- # set +x 00:09:03.925 ************************************ 00:09:03.925 START TEST nvmf_connect_disconnect 00:09:03.925 ************************************ 00:09:03.925 17:46:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:04.186 * Looking for test storage... 
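Before the connect/disconnect suite starts, the flow that nvmf_referrals just exercised is worth condensing: referrals are added to the discovery service over the RPC socket, read back both via RPC and via nvme discover against port 8009, compared, and then removed again. A minimal sketch of one round trip with the same addresses as above (rpc_cmd is the test framework's wrapper around SPDK's scripts/rpc.py; the real invocations also pass --hostnqn/--hostid to nvme discover):

    # add three referrals and list them over RPC
    rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
    rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
    rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

    # the same referrals as seen by an initiator querying the discovery service
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

    # remove them again; both views should end up empty
    rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
    rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430
    rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430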
00:09:04.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:04.186 17:46:08 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:04.186 17:46:08 -- nvmf/common.sh@7 -- # uname -s 00:09:04.186 17:46:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:04.186 17:46:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:04.186 17:46:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:04.186 17:46:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:04.186 17:46:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:04.186 17:46:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:04.186 17:46:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:04.186 17:46:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:04.186 17:46:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:04.186 17:46:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:04.186 17:46:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:09:04.186 17:46:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:09:04.186 17:46:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:04.186 17:46:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:04.186 17:46:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:04.186 17:46:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:04.186 17:46:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.186 17:46:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.186 17:46:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.186 17:46:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.186 17:46:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.186 17:46:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.186 17:46:08 -- paths/export.sh@5 -- # export PATH 00:09:04.186 17:46:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.186 17:46:08 -- nvmf/common.sh@46 -- # : 0 00:09:04.186 17:46:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:04.186 17:46:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:04.186 17:46:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:04.186 17:46:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:04.186 17:46:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:04.186 17:46:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:04.186 17:46:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:04.186 17:46:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:04.186 17:46:08 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:04.186 17:46:08 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:04.187 17:46:08 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:04.187 17:46:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:04.187 17:46:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:04.187 17:46:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:04.187 17:46:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:04.187 17:46:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:04.187 17:46:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.187 17:46:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:04.187 17:46:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.187 17:46:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:09:04.187 17:46:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:09:04.187 17:46:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:09:04.187 17:46:08 -- common/autotest_common.sh@10 -- # set +x 00:09:12.329 17:46:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:12.329 17:46:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:09:12.329 17:46:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:09:12.329 17:46:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:09:12.329 17:46:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:09:12.329 17:46:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:09:12.329 17:46:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:09:12.329 17:46:16 -- nvmf/common.sh@294 -- # net_devs=() 00:09:12.329 17:46:16 -- nvmf/common.sh@294 -- # local -ga net_devs 
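The device discovery that gather_supported_nvmf_pci_devs is tracing below works by matching PCI vendor/device IDs against known NIC tables (e810, x722, mlx) and then resolving each matching PCI address to its kernel interface through sysfs. Stripped of the tracing, the e810 path seen in this run is roughly:

    # E810 ports are keyed by vendor:device id (0x8086:0x1592 / 0x8086:0x159b)
    pci_devs=(${pci_bus_cache["0x8086:0x159b"]})
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs entries for this port
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
        net_devs+=("${pci_net_devs[@]}")                    # here: cvl_0_0 and cvl_0_1
    done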
00:09:12.329 17:46:16 -- nvmf/common.sh@295 -- # e810=() 00:09:12.329 17:46:16 -- nvmf/common.sh@295 -- # local -ga e810 00:09:12.329 17:46:16 -- nvmf/common.sh@296 -- # x722=() 00:09:12.329 17:46:16 -- nvmf/common.sh@296 -- # local -ga x722 00:09:12.329 17:46:16 -- nvmf/common.sh@297 -- # mlx=() 00:09:12.329 17:46:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:09:12.329 17:46:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:12.329 17:46:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:12.329 17:46:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:12.329 17:46:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:12.329 17:46:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:12.329 17:46:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:12.329 17:46:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:12.329 17:46:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:12.329 17:46:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:12.329 17:46:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:12.329 17:46:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:12.329 17:46:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:09:12.329 17:46:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:09:12.329 17:46:16 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:09:12.329 17:46:16 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:09:12.329 17:46:16 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:09:12.329 17:46:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:09:12.329 17:46:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:12.329 17:46:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:12.329 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:12.329 17:46:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:09:12.329 17:46:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:09:12.329 17:46:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.329 17:46:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.329 17:46:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:09:12.329 17:46:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:12.329 17:46:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:12.329 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:12.329 17:46:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:09:12.329 17:46:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:09:12.329 17:46:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.329 17:46:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.329 17:46:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:09:12.329 17:46:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:09:12.329 17:46:16 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:09:12.329 17:46:16 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:09:12.329 17:46:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:12.329 17:46:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.329 17:46:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:12.329 17:46:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.329 17:46:16 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:4b:00.0: cvl_0_0' 00:09:12.329 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:12.329 17:46:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.329 17:46:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:12.329 17:46:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.329 17:46:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:12.329 17:46:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.329 17:46:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:12.329 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:12.329 17:46:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.329 17:46:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:09:12.329 17:46:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:09:12.329 17:46:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:09:12.329 17:46:16 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:09:12.329 17:46:16 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:09:12.329 17:46:16 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:12.329 17:46:16 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:12.329 17:46:16 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:12.329 17:46:16 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:09:12.329 17:46:16 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:12.329 17:46:16 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:12.329 17:46:16 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:09:12.329 17:46:16 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:12.329 17:46:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:12.329 17:46:16 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:09:12.329 17:46:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:09:12.329 17:46:16 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:09:12.329 17:46:16 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:12.329 17:46:16 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:12.329 17:46:16 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:12.329 17:46:16 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:09:12.329 17:46:16 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:12.329 17:46:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:12.329 17:46:16 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:12.329 17:46:16 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:09:12.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:12.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:09:12.329 00:09:12.329 --- 10.0.0.2 ping statistics --- 00:09:12.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.329 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:09:12.329 17:46:16 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:12.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:12.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:09:12.329 00:09:12.329 --- 10.0.0.1 ping statistics --- 00:09:12.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.329 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:09:12.329 17:46:16 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.329 17:46:16 -- nvmf/common.sh@410 -- # return 0 00:09:12.329 17:46:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:12.329 17:46:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.329 17:46:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:12.329 17:46:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:12.329 17:46:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:12.329 17:46:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:12.329 17:46:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:12.329 17:46:16 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:12.329 17:46:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:12.330 17:46:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:12.330 17:46:16 -- common/autotest_common.sh@10 -- # set +x 00:09:12.330 17:46:16 -- nvmf/common.sh@469 -- # nvmfpid=1534245 00:09:12.330 17:46:16 -- nvmf/common.sh@470 -- # waitforlisten 1534245 00:09:12.330 17:46:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:12.330 17:46:16 -- common/autotest_common.sh@819 -- # '[' -z 1534245 ']' 00:09:12.330 17:46:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.330 17:46:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:12.330 17:46:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.330 17:46:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:12.330 17:46:16 -- common/autotest_common.sh@10 -- # set +x 00:09:12.330 [2024-07-22 17:46:16.420968] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:12.330 [2024-07-22 17:46:16.421026] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.330 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.330 [2024-07-22 17:46:16.494822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:12.330 [2024-07-22 17:46:16.564860] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:12.330 [2024-07-22 17:46:16.564995] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.330 [2024-07-22 17:46:16.565004] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:12.330 [2024-07-22 17:46:16.565012] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:12.330 [2024-07-22 17:46:16.565153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.330 [2024-07-22 17:46:16.565258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:12.330 [2024-07-22 17:46:16.566384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:12.330 [2024-07-22 17:46:16.566387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.271 17:46:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:13.271 17:46:17 -- common/autotest_common.sh@852 -- # return 0 00:09:13.271 17:46:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:13.271 17:46:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:13.271 17:46:17 -- common/autotest_common.sh@10 -- # set +x 00:09:13.271 17:46:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.271 17:46:17 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:13.271 17:46:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:13.271 17:46:17 -- common/autotest_common.sh@10 -- # set +x 00:09:13.271 [2024-07-22 17:46:17.318740] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.271 17:46:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:13.271 17:46:17 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:13.271 17:46:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:13.271 17:46:17 -- common/autotest_common.sh@10 -- # set +x 00:09:13.271 17:46:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:13.271 17:46:17 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:13.272 17:46:17 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:13.272 17:46:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:13.272 17:46:17 -- common/autotest_common.sh@10 -- # set +x 00:09:13.272 17:46:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:13.272 17:46:17 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:13.272 17:46:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:13.272 17:46:17 -- common/autotest_common.sh@10 -- # set +x 00:09:13.272 17:46:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:13.272 17:46:17 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:13.272 17:46:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:13.272 17:46:17 -- common/autotest_common.sh@10 -- # set +x 00:09:13.272 [2024-07-22 17:46:17.374818] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:13.272 17:46:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:13.272 17:46:17 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:13.272 17:46:17 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:13.272 17:46:17 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:13.272 17:46:17 -- target/connect_disconnect.sh@34 -- # set +x 00:09:15.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.730 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:09:24.727 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.685 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.995 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.898 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.402 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.311 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:11:16.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.707 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.162 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.069 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.394 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.377 17:50:05 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
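The long run of "disconnected 1 controller(s)" messages above is the body of connect_disconnect.sh: a 64 MB malloc bdev is exported through nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 (the setup RPCs are traced earlier in this run), and the test then performs 100 connect/disconnect cycles against it with 8 I/O queues per connect. A rough sketch; the log only shows the NVME_CONNECT='nvme connect -i 8' prefix and the disconnect output, so the exact command lines in the loop below are assumptions:

    # one-time target setup (traced above)
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc_cmd bdev_malloc_create 64 512                                  # -> Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # 100 connect/disconnect iterations (command forms assumed)
    for i in $(seq 1 100); do
        nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # prints the 'disconnected 1 controller(s)' line
    done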
00:13:01.377 17:50:05 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:01.377 17:50:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:01.377 17:50:05 -- nvmf/common.sh@116 -- # sync 00:13:01.377 17:50:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:01.377 17:50:05 -- nvmf/common.sh@119 -- # set +e 00:13:01.377 17:50:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:01.377 17:50:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:01.377 rmmod nvme_tcp 00:13:01.377 rmmod nvme_fabrics 00:13:01.377 rmmod nvme_keyring 00:13:01.377 17:50:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:01.377 17:50:05 -- nvmf/common.sh@123 -- # set -e 00:13:01.377 17:50:05 -- nvmf/common.sh@124 -- # return 0 00:13:01.377 17:50:05 -- nvmf/common.sh@477 -- # '[' -n 1534245 ']' 00:13:01.377 17:50:05 -- nvmf/common.sh@478 -- # killprocess 1534245 00:13:01.377 17:50:05 -- common/autotest_common.sh@926 -- # '[' -z 1534245 ']' 00:13:01.377 17:50:05 -- common/autotest_common.sh@930 -- # kill -0 1534245 00:13:01.377 17:50:05 -- common/autotest_common.sh@931 -- # uname 00:13:01.377 17:50:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:01.377 17:50:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1534245 00:13:01.377 17:50:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:01.377 17:50:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:01.377 17:50:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1534245' 00:13:01.377 killing process with pid 1534245 00:13:01.377 17:50:05 -- common/autotest_common.sh@945 -- # kill 1534245 00:13:01.377 17:50:05 -- common/autotest_common.sh@950 -- # wait 1534245 00:13:01.377 17:50:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:01.377 17:50:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:01.377 17:50:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:01.377 17:50:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:01.377 17:50:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:01.377 17:50:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.377 17:50:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:01.377 17:50:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.291 17:50:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:03.291 00:13:03.291 real 3m59.340s 00:13:03.291 user 15m9.949s 00:13:03.291 sys 0m19.447s 00:13:03.291 17:50:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:03.291 17:50:07 -- common/autotest_common.sh@10 -- # set +x 00:13:03.291 ************************************ 00:13:03.291 END TEST nvmf_connect_disconnect 00:13:03.291 ************************************ 00:13:03.291 17:50:07 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:03.291 17:50:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:03.291 17:50:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:03.291 17:50:07 -- common/autotest_common.sh@10 -- # set +x 00:13:03.291 ************************************ 00:13:03.291 START TEST nvmf_multitarget 00:13:03.291 ************************************ 00:13:03.291 17:50:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:03.551 * Looking for test storage... 
00:13:03.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:03.551 17:50:07 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:03.551 17:50:07 -- nvmf/common.sh@7 -- # uname -s 00:13:03.551 17:50:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:03.551 17:50:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:03.551 17:50:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:03.551 17:50:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:03.551 17:50:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:03.551 17:50:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:03.551 17:50:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:03.551 17:50:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:03.551 17:50:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:03.551 17:50:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:03.551 17:50:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:03.551 17:50:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:03.551 17:50:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:03.551 17:50:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:03.551 17:50:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:03.551 17:50:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:03.551 17:50:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:03.551 17:50:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:03.551 17:50:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:03.551 17:50:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.551 17:50:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.552 17:50:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.552 17:50:07 -- paths/export.sh@5 -- # export PATH 00:13:03.552 17:50:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.552 17:50:07 -- nvmf/common.sh@46 -- # : 0 00:13:03.552 17:50:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:03.552 17:50:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:03.552 17:50:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:03.552 17:50:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:03.552 17:50:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:03.552 17:50:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:03.552 17:50:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:03.552 17:50:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:03.552 17:50:07 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:03.552 17:50:07 -- target/multitarget.sh@15 -- # nvmftestinit 00:13:03.552 17:50:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:03.552 17:50:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:03.552 17:50:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:03.552 17:50:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:03.552 17:50:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:03.552 17:50:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.552 17:50:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:03.552 17:50:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.552 17:50:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:03.552 17:50:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:03.552 17:50:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:03.552 17:50:07 -- common/autotest_common.sh@10 -- # set +x 00:13:11.709 17:50:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:11.709 17:50:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:11.709 17:50:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:11.709 17:50:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:11.709 17:50:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:11.709 17:50:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:11.709 17:50:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:11.709 17:50:15 -- nvmf/common.sh@294 -- # net_devs=() 00:13:11.709 17:50:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:11.709 17:50:15 -- 
nvmf/common.sh@295 -- # e810=() 00:13:11.709 17:50:15 -- nvmf/common.sh@295 -- # local -ga e810 00:13:11.709 17:50:15 -- nvmf/common.sh@296 -- # x722=() 00:13:11.709 17:50:15 -- nvmf/common.sh@296 -- # local -ga x722 00:13:11.709 17:50:15 -- nvmf/common.sh@297 -- # mlx=() 00:13:11.709 17:50:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:11.709 17:50:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:11.709 17:50:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:11.709 17:50:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:11.709 17:50:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:11.709 17:50:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:11.709 17:50:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:11.709 17:50:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:11.709 17:50:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:11.709 17:50:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:11.709 17:50:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:11.709 17:50:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:11.709 17:50:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:11.709 17:50:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:11.709 17:50:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:11.709 17:50:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:11.709 17:50:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:11.709 17:50:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:11.709 17:50:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:11.709 17:50:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:11.709 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:11.709 17:50:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:11.710 17:50:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:11.710 17:50:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.710 17:50:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.710 17:50:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:11.710 17:50:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:11.710 17:50:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:11.710 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:11.710 17:50:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:11.710 17:50:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:11.710 17:50:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.710 17:50:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.710 17:50:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:11.710 17:50:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:11.710 17:50:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:11.710 17:50:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:11.710 17:50:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:11.710 17:50:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.710 17:50:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:11.710 17:50:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.710 17:50:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:13:11.710 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:11.710 17:50:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.710 17:50:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:11.710 17:50:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.710 17:50:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:11.710 17:50:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.710 17:50:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:11.710 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:11.710 17:50:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.710 17:50:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:11.710 17:50:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:11.710 17:50:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:11.710 17:50:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:11.710 17:50:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:11.710 17:50:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:11.710 17:50:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:11.710 17:50:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:11.710 17:50:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:11.710 17:50:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:11.710 17:50:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:11.710 17:50:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:11.710 17:50:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:11.710 17:50:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:11.710 17:50:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:11.710 17:50:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:11.710 17:50:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:11.710 17:50:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:11.710 17:50:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:11.710 17:50:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:11.710 17:50:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:11.710 17:50:15 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:11.710 17:50:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:11.710 17:50:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:11.710 17:50:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:11.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:11.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:13:11.710 00:13:11.710 --- 10.0.0.2 ping statistics --- 00:13:11.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.710 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:13:11.710 17:50:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:11.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:11.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:13:11.710 00:13:11.710 --- 10.0.0.1 ping statistics --- 00:13:11.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.710 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:13:11.710 17:50:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:11.710 17:50:15 -- nvmf/common.sh@410 -- # return 0 00:13:11.710 17:50:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:11.710 17:50:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:11.710 17:50:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:11.710 17:50:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:11.710 17:50:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:11.710 17:50:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:11.710 17:50:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:11.710 17:50:15 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:11.710 17:50:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:11.710 17:50:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:11.710 17:50:15 -- common/autotest_common.sh@10 -- # set +x 00:13:11.710 17:50:15 -- nvmf/common.sh@469 -- # nvmfpid=1578854 00:13:11.710 17:50:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:11.710 17:50:15 -- nvmf/common.sh@470 -- # waitforlisten 1578854 00:13:11.710 17:50:15 -- common/autotest_common.sh@819 -- # '[' -z 1578854 ']' 00:13:11.710 17:50:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.710 17:50:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:11.710 17:50:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.710 17:50:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:11.710 17:50:15 -- common/autotest_common.sh@10 -- # set +x 00:13:11.710 [2024-07-22 17:50:15.724891] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:13:11.710 [2024-07-22 17:50:15.724952] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.710 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.710 [2024-07-22 17:50:15.818158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:11.710 [2024-07-22 17:50:15.909821] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:11.710 [2024-07-22 17:50:15.909978] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:11.710 [2024-07-22 17:50:15.909987] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:11.710 [2024-07-22 17:50:15.909994] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
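Condensed, the nvmf_tcp_init sequence traced above amounts to roughly the following (a sketch; interface names and addresses are the ones reported in the log, teardown and stale-state flushes omitted):

  # move the target-side port into its own namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port and verify reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1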
00:13:11.710 [2024-07-22 17:50:15.910130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.710 [2024-07-22 17:50:15.910256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.710 [2024-07-22 17:50:15.910391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:11.710 [2024-07-22 17:50:15.910458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.651 17:50:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:12.651 17:50:16 -- common/autotest_common.sh@852 -- # return 0 00:13:12.651 17:50:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:12.651 17:50:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:12.651 17:50:16 -- common/autotest_common.sh@10 -- # set +x 00:13:12.651 17:50:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:12.651 17:50:16 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:12.651 17:50:16 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:12.651 17:50:16 -- target/multitarget.sh@21 -- # jq length 00:13:12.651 17:50:16 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:12.651 17:50:16 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:12.651 "nvmf_tgt_1" 00:13:12.651 17:50:16 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:12.651 "nvmf_tgt_2" 00:13:12.911 17:50:16 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:12.911 17:50:16 -- target/multitarget.sh@28 -- # jq length 00:13:12.911 17:50:17 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:12.911 17:50:17 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:12.912 true 00:13:12.912 17:50:17 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:13.173 true 00:13:13.173 17:50:17 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:13.173 17:50:17 -- target/multitarget.sh@35 -- # jq length 00:13:13.173 17:50:17 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:13.173 17:50:17 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:13.173 17:50:17 -- target/multitarget.sh@41 -- # nvmftestfini 00:13:13.173 17:50:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:13.173 17:50:17 -- nvmf/common.sh@116 -- # sync 00:13:13.173 17:50:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:13.173 17:50:17 -- nvmf/common.sh@119 -- # set +e 00:13:13.173 17:50:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:13.173 17:50:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:13.173 rmmod nvme_tcp 00:13:13.173 rmmod nvme_fabrics 00:13:13.173 rmmod nvme_keyring 00:13:13.173 17:50:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:13.173 17:50:17 -- nvmf/common.sh@123 -- # set -e 00:13:13.173 17:50:17 -- nvmf/common.sh@124 -- # return 0 
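The multitarget checks traced above reduce to roughly this sequence against the running target (a sketch; the multitarget_rpc.py path is shortened to rpc_py, and the expected counts are the values the '[' ... '!=' ... ']' tests compare against):

  rpc_py=./test/nvmf/target/multitarget_rpc.py
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
  $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]   # default target plus the two new ones
  $rpc_py nvmf_delete_target -n nvmf_tgt_1
  $rpc_py nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to the default target only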
00:13:13.173 17:50:17 -- nvmf/common.sh@477 -- # '[' -n 1578854 ']' 00:13:13.173 17:50:17 -- nvmf/common.sh@478 -- # killprocess 1578854 00:13:13.173 17:50:17 -- common/autotest_common.sh@926 -- # '[' -z 1578854 ']' 00:13:13.173 17:50:17 -- common/autotest_common.sh@930 -- # kill -0 1578854 00:13:13.173 17:50:17 -- common/autotest_common.sh@931 -- # uname 00:13:13.173 17:50:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:13.173 17:50:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1578854 00:13:13.433 17:50:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:13.433 17:50:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:13.433 17:50:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1578854' 00:13:13.433 killing process with pid 1578854 00:13:13.433 17:50:17 -- common/autotest_common.sh@945 -- # kill 1578854 00:13:13.433 17:50:17 -- common/autotest_common.sh@950 -- # wait 1578854 00:13:13.433 17:50:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:13.433 17:50:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:13.433 17:50:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:13.433 17:50:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:13.433 17:50:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:13.433 17:50:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.433 17:50:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.433 17:50:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.978 17:50:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:15.978 00:13:15.978 real 0m12.126s 00:13:15.978 user 0m9.823s 00:13:15.978 sys 0m6.432s 00:13:15.978 17:50:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:15.978 17:50:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.978 ************************************ 00:13:15.978 END TEST nvmf_multitarget 00:13:15.978 ************************************ 00:13:15.979 17:50:19 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:15.979 17:50:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:15.979 17:50:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:15.979 17:50:19 -- common/autotest_common.sh@10 -- # set +x 00:13:15.979 ************************************ 00:13:15.979 START TEST nvmf_rpc 00:13:15.979 ************************************ 00:13:15.979 17:50:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:15.979 * Looking for test storage... 
00:13:15.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:15.979 17:50:19 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.979 17:50:19 -- nvmf/common.sh@7 -- # uname -s 00:13:15.979 17:50:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.979 17:50:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.979 17:50:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.979 17:50:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.979 17:50:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.979 17:50:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.979 17:50:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.979 17:50:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.979 17:50:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.979 17:50:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.979 17:50:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:15.979 17:50:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:15.979 17:50:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.979 17:50:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.979 17:50:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:15.979 17:50:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:15.979 17:50:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.979 17:50:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.979 17:50:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.979 17:50:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.979 17:50:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.979 17:50:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.979 17:50:19 -- paths/export.sh@5 -- # export PATH 00:13:15.979 17:50:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.979 17:50:19 -- nvmf/common.sh@46 -- # : 0 00:13:15.979 17:50:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:15.979 17:50:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:15.979 17:50:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:15.979 17:50:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.979 17:50:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.979 17:50:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:15.979 17:50:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:15.979 17:50:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:15.979 17:50:19 -- target/rpc.sh@11 -- # loops=5 00:13:15.979 17:50:19 -- target/rpc.sh@23 -- # nvmftestinit 00:13:15.979 17:50:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:15.979 17:50:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.979 17:50:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:15.979 17:50:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:15.979 17:50:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:15.979 17:50:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.979 17:50:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:15.979 17:50:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.979 17:50:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:15.979 17:50:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:15.979 17:50:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:15.979 17:50:19 -- common/autotest_common.sh@10 -- # set +x 00:13:24.235 17:50:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:24.235 17:50:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:24.235 17:50:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:24.235 17:50:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:24.235 17:50:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:24.235 17:50:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:24.235 17:50:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:24.235 17:50:27 -- nvmf/common.sh@294 -- # net_devs=() 00:13:24.235 17:50:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:24.235 17:50:27 -- nvmf/common.sh@295 -- # e810=() 00:13:24.235 17:50:27 -- nvmf/common.sh@295 -- # local -ga e810 00:13:24.235 
17:50:27 -- nvmf/common.sh@296 -- # x722=() 00:13:24.235 17:50:27 -- nvmf/common.sh@296 -- # local -ga x722 00:13:24.235 17:50:27 -- nvmf/common.sh@297 -- # mlx=() 00:13:24.235 17:50:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:24.235 17:50:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:24.235 17:50:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:24.235 17:50:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:24.235 17:50:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:24.235 17:50:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:24.235 17:50:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:24.235 17:50:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:24.235 17:50:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:24.235 17:50:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:24.235 17:50:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:24.235 17:50:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:24.235 17:50:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:24.235 17:50:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:24.235 17:50:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:24.235 17:50:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:24.235 17:50:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:24.235 17:50:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:24.235 17:50:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:24.235 17:50:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:24.235 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:24.235 17:50:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:24.235 17:50:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:24.235 17:50:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.235 17:50:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.235 17:50:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:24.235 17:50:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:24.235 17:50:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:24.235 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:24.235 17:50:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:24.235 17:50:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:24.235 17:50:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.235 17:50:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.235 17:50:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:24.235 17:50:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:24.235 17:50:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:24.235 17:50:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:24.235 17:50:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:24.235 17:50:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.235 17:50:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:24.235 17:50:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.235 17:50:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:24.235 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:24.235 17:50:27 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:24.235 17:50:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:24.235 17:50:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.235 17:50:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:24.235 17:50:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.235 17:50:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:24.235 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:24.235 17:50:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.235 17:50:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:24.235 17:50:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:24.235 17:50:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:24.235 17:50:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:24.235 17:50:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:24.235 17:50:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:24.235 17:50:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:24.235 17:50:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:24.235 17:50:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:24.235 17:50:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:24.235 17:50:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:24.235 17:50:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:24.235 17:50:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:24.235 17:50:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:24.235 17:50:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:24.235 17:50:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:24.235 17:50:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:24.235 17:50:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:24.235 17:50:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:24.235 17:50:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:24.235 17:50:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:24.235 17:50:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:24.235 17:50:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:24.235 17:50:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:24.235 17:50:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:24.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:24.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.574 ms 00:13:24.235 00:13:24.235 --- 10.0.0.2 ping statistics --- 00:13:24.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.235 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms 00:13:24.235 17:50:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:24.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:24.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:13:24.235 00:13:24.235 --- 10.0.0.1 ping statistics --- 00:13:24.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.236 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:13:24.236 17:50:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:24.236 17:50:27 -- nvmf/common.sh@410 -- # return 0 00:13:24.236 17:50:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:24.236 17:50:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:24.236 17:50:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:24.236 17:50:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:24.236 17:50:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:24.236 17:50:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:24.236 17:50:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:24.236 17:50:27 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:24.236 17:50:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:24.236 17:50:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:24.236 17:50:27 -- common/autotest_common.sh@10 -- # set +x 00:13:24.236 17:50:27 -- nvmf/common.sh@469 -- # nvmfpid=1583676 00:13:24.236 17:50:27 -- nvmf/common.sh@470 -- # waitforlisten 1583676 00:13:24.236 17:50:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:24.236 17:50:27 -- common/autotest_common.sh@819 -- # '[' -z 1583676 ']' 00:13:24.236 17:50:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.236 17:50:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:24.236 17:50:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.236 17:50:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:24.236 17:50:27 -- common/autotest_common.sh@10 -- # set +x 00:13:24.236 [2024-07-22 17:50:27.890492] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:13:24.236 [2024-07-22 17:50:27.890554] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.236 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.236 [2024-07-22 17:50:27.983955] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:24.236 [2024-07-22 17:50:28.073716] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:24.236 [2024-07-22 17:50:28.073887] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.236 [2024-07-22 17:50:28.073897] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:24.236 [2024-07-22 17:50:28.073904] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
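The nvmfappstart step above boils down to launching the target inside the network namespace and waiting for its RPC socket (a minimal sketch; the real harness adds traps and timing, and waitforlisten is the helper function seen in the trace):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  waitforlisten "$nvmfpid"    # polls until /var/tmp/spdk.sock accepts RPC connections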
00:13:24.236 [2024-07-22 17:50:28.074048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.236 [2024-07-22 17:50:28.074171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:24.236 [2024-07-22 17:50:28.074303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:24.236 [2024-07-22 17:50:28.074306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.496 17:50:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:24.496 17:50:28 -- common/autotest_common.sh@852 -- # return 0 00:13:24.496 17:50:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:24.496 17:50:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:24.496 17:50:28 -- common/autotest_common.sh@10 -- # set +x 00:13:24.496 17:50:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:24.496 17:50:28 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:24.496 17:50:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.496 17:50:28 -- common/autotest_common.sh@10 -- # set +x 00:13:24.496 17:50:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:24.496 17:50:28 -- target/rpc.sh@26 -- # stats='{ 00:13:24.496 "tick_rate": 2600000000, 00:13:24.496 "poll_groups": [ 00:13:24.496 { 00:13:24.496 "name": "nvmf_tgt_poll_group_0", 00:13:24.496 "admin_qpairs": 0, 00:13:24.496 "io_qpairs": 0, 00:13:24.496 "current_admin_qpairs": 0, 00:13:24.496 "current_io_qpairs": 0, 00:13:24.496 "pending_bdev_io": 0, 00:13:24.496 "completed_nvme_io": 0, 00:13:24.496 "transports": [] 00:13:24.496 }, 00:13:24.496 { 00:13:24.496 "name": "nvmf_tgt_poll_group_1", 00:13:24.496 "admin_qpairs": 0, 00:13:24.496 "io_qpairs": 0, 00:13:24.496 "current_admin_qpairs": 0, 00:13:24.496 "current_io_qpairs": 0, 00:13:24.496 "pending_bdev_io": 0, 00:13:24.496 "completed_nvme_io": 0, 00:13:24.496 "transports": [] 00:13:24.496 }, 00:13:24.496 { 00:13:24.496 "name": "nvmf_tgt_poll_group_2", 00:13:24.496 "admin_qpairs": 0, 00:13:24.496 "io_qpairs": 0, 00:13:24.496 "current_admin_qpairs": 0, 00:13:24.496 "current_io_qpairs": 0, 00:13:24.496 "pending_bdev_io": 0, 00:13:24.496 "completed_nvme_io": 0, 00:13:24.496 "transports": [] 00:13:24.496 }, 00:13:24.496 { 00:13:24.496 "name": "nvmf_tgt_poll_group_3", 00:13:24.496 "admin_qpairs": 0, 00:13:24.496 "io_qpairs": 0, 00:13:24.496 "current_admin_qpairs": 0, 00:13:24.496 "current_io_qpairs": 0, 00:13:24.496 "pending_bdev_io": 0, 00:13:24.496 "completed_nvme_io": 0, 00:13:24.496 "transports": [] 00:13:24.496 } 00:13:24.497 ] 00:13:24.497 }' 00:13:24.497 17:50:28 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:24.497 17:50:28 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:24.757 17:50:28 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:24.757 17:50:28 -- target/rpc.sh@15 -- # wc -l 00:13:24.757 17:50:28 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:24.757 17:50:28 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:24.757 17:50:28 -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:24.757 17:50:28 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:24.757 17:50:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.757 17:50:28 -- common/autotest_common.sh@10 -- # set +x 00:13:24.757 [2024-07-22 17:50:28.867835] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:24.757 17:50:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:24.757 17:50:28 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:24.757 17:50:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.757 17:50:28 -- common/autotest_common.sh@10 -- # set +x 00:13:24.757 17:50:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:24.757 17:50:28 -- target/rpc.sh@33 -- # stats='{ 00:13:24.757 "tick_rate": 2600000000, 00:13:24.757 "poll_groups": [ 00:13:24.757 { 00:13:24.757 "name": "nvmf_tgt_poll_group_0", 00:13:24.757 "admin_qpairs": 0, 00:13:24.757 "io_qpairs": 0, 00:13:24.757 "current_admin_qpairs": 0, 00:13:24.757 "current_io_qpairs": 0, 00:13:24.757 "pending_bdev_io": 0, 00:13:24.757 "completed_nvme_io": 0, 00:13:24.757 "transports": [ 00:13:24.757 { 00:13:24.757 "trtype": "TCP" 00:13:24.757 } 00:13:24.757 ] 00:13:24.757 }, 00:13:24.757 { 00:13:24.757 "name": "nvmf_tgt_poll_group_1", 00:13:24.757 "admin_qpairs": 0, 00:13:24.757 "io_qpairs": 0, 00:13:24.757 "current_admin_qpairs": 0, 00:13:24.757 "current_io_qpairs": 0, 00:13:24.757 "pending_bdev_io": 0, 00:13:24.757 "completed_nvme_io": 0, 00:13:24.757 "transports": [ 00:13:24.757 { 00:13:24.757 "trtype": "TCP" 00:13:24.757 } 00:13:24.757 ] 00:13:24.757 }, 00:13:24.757 { 00:13:24.757 "name": "nvmf_tgt_poll_group_2", 00:13:24.757 "admin_qpairs": 0, 00:13:24.757 "io_qpairs": 0, 00:13:24.757 "current_admin_qpairs": 0, 00:13:24.757 "current_io_qpairs": 0, 00:13:24.757 "pending_bdev_io": 0, 00:13:24.757 "completed_nvme_io": 0, 00:13:24.757 "transports": [ 00:13:24.757 { 00:13:24.757 "trtype": "TCP" 00:13:24.757 } 00:13:24.757 ] 00:13:24.757 }, 00:13:24.757 { 00:13:24.757 "name": "nvmf_tgt_poll_group_3", 00:13:24.757 "admin_qpairs": 0, 00:13:24.757 "io_qpairs": 0, 00:13:24.757 "current_admin_qpairs": 0, 00:13:24.757 "current_io_qpairs": 0, 00:13:24.757 "pending_bdev_io": 0, 00:13:24.757 "completed_nvme_io": 0, 00:13:24.757 "transports": [ 00:13:24.757 { 00:13:24.757 "trtype": "TCP" 00:13:24.757 } 00:13:24.757 ] 00:13:24.757 } 00:13:24.757 ] 00:13:24.757 }' 00:13:24.757 17:50:28 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:24.757 17:50:28 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:24.757 17:50:28 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:24.757 17:50:28 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:24.757 17:50:28 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:24.757 17:50:28 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:24.757 17:50:28 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:24.757 17:50:28 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:24.757 17:50:28 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:24.757 17:50:28 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:24.757 17:50:28 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:24.757 17:50:28 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:24.757 17:50:28 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:24.757 17:50:28 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:24.757 17:50:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.757 17:50:28 -- common/autotest_common.sh@10 -- # set +x 00:13:24.757 Malloc1 00:13:24.757 17:50:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:24.757 17:50:29 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:24.757 17:50:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.757 17:50:29 -- common/autotest_common.sh@10 -- # set +x 00:13:24.757 
17:50:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:24.757 17:50:29 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:24.757 17:50:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.757 17:50:29 -- common/autotest_common.sh@10 -- # set +x 00:13:25.017 17:50:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:25.017 17:50:29 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:25.017 17:50:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:25.017 17:50:29 -- common/autotest_common.sh@10 -- # set +x 00:13:25.017 17:50:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:25.017 17:50:29 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:25.017 17:50:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:25.017 17:50:29 -- common/autotest_common.sh@10 -- # set +x 00:13:25.017 [2024-07-22 17:50:29.052529] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.017 17:50:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:25.017 17:50:29 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -a 10.0.0.2 -s 4420 00:13:25.017 17:50:29 -- common/autotest_common.sh@640 -- # local es=0 00:13:25.017 17:50:29 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -a 10.0.0.2 -s 4420 00:13:25.017 17:50:29 -- common/autotest_common.sh@628 -- # local arg=nvme 00:13:25.017 17:50:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:25.017 17:50:29 -- common/autotest_common.sh@632 -- # type -t nvme 00:13:25.017 17:50:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:25.017 17:50:29 -- common/autotest_common.sh@634 -- # type -P nvme 00:13:25.017 17:50:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:25.017 17:50:29 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:13:25.017 17:50:29 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:13:25.017 17:50:29 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a -a 10.0.0.2 -s 4420 00:13:25.018 [2024-07-22 17:50:29.079678] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a' 00:13:25.018 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:25.018 could not add new controller: failed to write to nvme-fabrics device 00:13:25.018 17:50:29 -- common/autotest_common.sh@643 -- # es=1 00:13:25.018 17:50:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:25.018 17:50:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:25.018 17:50:29 -- common/autotest_common.sh@667 -- # 
(( !es == 0 )) 00:13:25.018 17:50:29 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:25.018 17:50:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:25.018 17:50:29 -- common/autotest_common.sh@10 -- # set +x 00:13:25.018 17:50:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:25.018 17:50:29 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:26.397 17:50:30 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:26.397 17:50:30 -- common/autotest_common.sh@1177 -- # local i=0 00:13:26.397 17:50:30 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:26.397 17:50:30 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:26.397 17:50:30 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:28.937 17:50:32 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:28.937 17:50:32 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:28.937 17:50:32 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:28.937 17:50:32 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:28.937 17:50:32 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:28.937 17:50:32 -- common/autotest_common.sh@1187 -- # return 0 00:13:28.937 17:50:32 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:28.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.937 17:50:32 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:28.937 17:50:32 -- common/autotest_common.sh@1198 -- # local i=0 00:13:28.937 17:50:32 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:28.937 17:50:32 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:28.937 17:50:32 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:28.937 17:50:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:28.937 17:50:32 -- common/autotest_common.sh@1210 -- # return 0 00:13:28.937 17:50:32 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:28.937 17:50:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.937 17:50:32 -- common/autotest_common.sh@10 -- # set +x 00:13:28.937 17:50:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.937 17:50:32 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:28.937 17:50:32 -- common/autotest_common.sh@640 -- # local es=0 00:13:28.937 17:50:32 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:28.937 17:50:32 -- common/autotest_common.sh@628 -- # local arg=nvme 00:13:28.937 17:50:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:28.938 17:50:32 -- common/autotest_common.sh@632 -- # type -t nvme 00:13:28.938 17:50:32 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:28.938 17:50:32 -- common/autotest_common.sh@634 -- # type -P nvme 00:13:28.938 17:50:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:28.938 17:50:32 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:13:28.938 17:50:32 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:13:28.938 17:50:32 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:28.938 [2024-07-22 17:50:32.852925] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a' 00:13:28.938 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:28.938 could not add new controller: failed to write to nvme-fabrics device 00:13:28.938 17:50:32 -- common/autotest_common.sh@643 -- # es=1 00:13:28.938 17:50:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:28.938 17:50:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:28.938 17:50:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:28.938 17:50:32 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:28.938 17:50:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.938 17:50:32 -- common/autotest_common.sh@10 -- # set +x 00:13:28.938 17:50:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.938 17:50:32 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:30.369 17:50:34 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:30.369 17:50:34 -- common/autotest_common.sh@1177 -- # local i=0 00:13:30.369 17:50:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:30.369 17:50:34 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:30.369 17:50:34 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:32.281 17:50:36 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:32.281 17:50:36 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:32.281 17:50:36 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:32.281 17:50:36 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:32.281 17:50:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:32.281 17:50:36 -- common/autotest_common.sh@1187 -- # return 0 00:13:32.281 17:50:36 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:32.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.281 17:50:36 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:32.281 17:50:36 -- common/autotest_common.sh@1198 -- # local i=0 00:13:32.281 17:50:36 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:32.281 17:50:36 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.281 17:50:36 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:32.281 17:50:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.281 17:50:36 -- common/autotest_common.sh@1210 -- # return 0 00:13:32.281 17:50:36 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:32.281 17:50:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.281 17:50:36 -- common/autotest_common.sh@10 -- # set +x 00:13:32.281 17:50:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.281 17:50:36 -- target/rpc.sh@81 -- # seq 1 5 00:13:32.281 17:50:36 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:32.281 17:50:36 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:32.281 17:50:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.281 17:50:36 -- common/autotest_common.sh@10 -- # set +x 00:13:32.281 17:50:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.281 17:50:36 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:32.281 17:50:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.281 17:50:36 -- common/autotest_common.sh@10 -- # set +x 00:13:32.281 [2024-07-22 17:50:36.532225] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.281 17:50:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.281 17:50:36 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:32.281 17:50:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.281 17:50:36 -- common/autotest_common.sh@10 -- # set +x 00:13:32.281 17:50:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.281 17:50:36 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:32.281 17:50:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.281 17:50:36 -- common/autotest_common.sh@10 -- # set +x 00:13:32.542 17:50:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.542 17:50:36 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:33.925 17:50:37 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:33.925 17:50:37 -- common/autotest_common.sh@1177 -- # local i=0 00:13:33.925 17:50:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:33.925 17:50:37 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:33.925 17:50:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:35.837 17:50:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:35.837 17:50:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:35.837 17:50:39 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:35.837 17:50:40 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:35.837 17:50:40 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:35.837 17:50:40 -- common/autotest_common.sh@1187 -- # return 0 00:13:35.837 17:50:40 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:35.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.837 17:50:40 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:35.837 17:50:40 -- common/autotest_common.sh@1198 -- # local i=0 00:13:35.837 17:50:40 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:36.098 17:50:40 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 
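The host-access checks traced above follow roughly this pattern (a sketch; NQNs are as in the log, and the --hostnqn/--hostid flags are abbreviated to the NVME_HOST array defined in nvmf/common.sh; each nvme connect outcome matches what the trace reports):

  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1     # restrict access to the allowed-host list
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # rejected: host not on the list
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
  nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # succeeds
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"   # a further connect is rejected again
  rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1             # connect succeeds once more
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1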
00:13:36.098 17:50:40 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:36.098 17:50:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:36.098 17:50:40 -- common/autotest_common.sh@1210 -- # return 0 00:13:36.098 17:50:40 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:36.098 17:50:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.098 17:50:40 -- common/autotest_common.sh@10 -- # set +x 00:13:36.098 17:50:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.098 17:50:40 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:36.098 17:50:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.098 17:50:40 -- common/autotest_common.sh@10 -- # set +x 00:13:36.098 17:50:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.098 17:50:40 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:36.098 17:50:40 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:36.098 17:50:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.098 17:50:40 -- common/autotest_common.sh@10 -- # set +x 00:13:36.098 17:50:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.098 17:50:40 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.098 17:50:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.098 17:50:40 -- common/autotest_common.sh@10 -- # set +x 00:13:36.098 [2024-07-22 17:50:40.175387] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.098 17:50:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.098 17:50:40 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:36.098 17:50:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.098 17:50:40 -- common/autotest_common.sh@10 -- # set +x 00:13:36.098 17:50:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.098 17:50:40 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:36.098 17:50:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.098 17:50:40 -- common/autotest_common.sh@10 -- # set +x 00:13:36.098 17:50:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.098 17:50:40 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:37.480 17:50:41 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:37.480 17:50:41 -- common/autotest_common.sh@1177 -- # local i=0 00:13:37.480 17:50:41 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:37.480 17:50:41 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:37.480 17:50:41 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:39.394 17:50:43 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:39.394 17:50:43 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:39.394 17:50:43 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:39.394 17:50:43 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:39.394 17:50:43 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:39.394 17:50:43 -- 
common/autotest_common.sh@1187 -- # return 0 00:13:39.394 17:50:43 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:39.655 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.655 17:50:43 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:39.655 17:50:43 -- common/autotest_common.sh@1198 -- # local i=0 00:13:39.655 17:50:43 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:39.655 17:50:43 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:39.655 17:50:43 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:39.655 17:50:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:39.655 17:50:43 -- common/autotest_common.sh@1210 -- # return 0 00:13:39.655 17:50:43 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:39.655 17:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.655 17:50:43 -- common/autotest_common.sh@10 -- # set +x 00:13:39.655 17:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.655 17:50:43 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:39.655 17:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.655 17:50:43 -- common/autotest_common.sh@10 -- # set +x 00:13:39.655 17:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.655 17:50:43 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:39.655 17:50:43 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:39.655 17:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.655 17:50:43 -- common/autotest_common.sh@10 -- # set +x 00:13:39.655 17:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.655 17:50:43 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:39.655 17:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.655 17:50:43 -- common/autotest_common.sh@10 -- # set +x 00:13:39.655 [2024-07-22 17:50:43.828656] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.655 17:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.655 17:50:43 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:39.655 17:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.655 17:50:43 -- common/autotest_common.sh@10 -- # set +x 00:13:39.655 17:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.655 17:50:43 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:39.655 17:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.655 17:50:43 -- common/autotest_common.sh@10 -- # set +x 00:13:39.655 17:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.655 17:50:43 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:41.038 17:50:45 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:41.038 17:50:45 -- common/autotest_common.sh@1177 -- # local i=0 00:13:41.038 17:50:45 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:41.038 17:50:45 -- common/autotest_common.sh@1179 -- 
# [[ -n '' ]] 00:13:41.038 17:50:45 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:43.581 17:50:47 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:43.581 17:50:47 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:43.581 17:50:47 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:43.581 17:50:47 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:43.581 17:50:47 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:43.581 17:50:47 -- common/autotest_common.sh@1187 -- # return 0 00:13:43.581 17:50:47 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:43.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.581 17:50:47 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:43.581 17:50:47 -- common/autotest_common.sh@1198 -- # local i=0 00:13:43.581 17:50:47 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:43.581 17:50:47 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:43.581 17:50:47 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:43.581 17:50:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:43.581 17:50:47 -- common/autotest_common.sh@1210 -- # return 0 00:13:43.581 17:50:47 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:43.581 17:50:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:43.581 17:50:47 -- common/autotest_common.sh@10 -- # set +x 00:13:43.581 17:50:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:43.581 17:50:47 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:43.581 17:50:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:43.581 17:50:47 -- common/autotest_common.sh@10 -- # set +x 00:13:43.581 17:50:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:43.581 17:50:47 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:43.581 17:50:47 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:43.581 17:50:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:43.581 17:50:47 -- common/autotest_common.sh@10 -- # set +x 00:13:43.581 17:50:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:43.581 17:50:47 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:43.581 17:50:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:43.581 17:50:47 -- common/autotest_common.sh@10 -- # set +x 00:13:43.581 [2024-07-22 17:50:47.479937] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:43.581 17:50:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:43.581 17:50:47 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:43.581 17:50:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:43.581 17:50:47 -- common/autotest_common.sh@10 -- # set +x 00:13:43.581 17:50:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:43.581 17:50:47 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:43.581 17:50:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:43.581 17:50:47 -- common/autotest_common.sh@10 -- # set +x 00:13:43.581 17:50:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:43.581 
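The waitforserial and waitforserial_disconnect helpers that appear in every iteration simply poll lsblk until a block device carrying the subsystem's serial number shows up on (or disappears from) the host. A minimal stand-alone version of that polling pattern (function name and timeout are illustrative, not the exact autotest helpers):
    # poll until a device with the given serial appears; non-zero return on timeout
    waitforserial() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
            sleep 2
        done
        return 1
    }
    waitforserial SPDKISFASTANDAWESOME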
17:50:47 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:44.964 17:50:49 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:44.964 17:50:49 -- common/autotest_common.sh@1177 -- # local i=0 00:13:44.964 17:50:49 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:44.964 17:50:49 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:44.964 17:50:49 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:46.876 17:50:51 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:46.876 17:50:51 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:46.876 17:50:51 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:46.876 17:50:51 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:46.876 17:50:51 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:46.876 17:50:51 -- common/autotest_common.sh@1187 -- # return 0 00:13:46.876 17:50:51 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:46.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.876 17:50:51 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:46.876 17:50:51 -- common/autotest_common.sh@1198 -- # local i=0 00:13:46.876 17:50:51 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:46.876 17:50:51 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:46.876 17:50:51 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:46.876 17:50:51 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:46.876 17:50:51 -- common/autotest_common.sh@1210 -- # return 0 00:13:46.876 17:50:51 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:46.876 17:50:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:46.876 17:50:51 -- common/autotest_common.sh@10 -- # set +x 00:13:46.876 17:50:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:46.876 17:50:51 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:46.876 17:50:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:46.876 17:50:51 -- common/autotest_common.sh@10 -- # set +x 00:13:46.877 17:50:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:46.877 17:50:51 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:46.877 17:50:51 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:46.877 17:50:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:46.877 17:50:51 -- common/autotest_common.sh@10 -- # set +x 00:13:47.137 17:50:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:47.137 17:50:51 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.137 17:50:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:47.137 17:50:51 -- common/autotest_common.sh@10 -- # set +x 00:13:47.137 [2024-07-22 17:50:51.168046] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.137 17:50:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:47.137 17:50:51 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:47.137 
17:50:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:47.137 17:50:51 -- common/autotest_common.sh@10 -- # set +x 00:13:47.137 17:50:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:47.137 17:50:51 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:47.137 17:50:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:47.137 17:50:51 -- common/autotest_common.sh@10 -- # set +x 00:13:47.137 17:50:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:47.137 17:50:51 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:48.521 17:50:52 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:48.521 17:50:52 -- common/autotest_common.sh@1177 -- # local i=0 00:13:48.521 17:50:52 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:48.521 17:50:52 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:48.522 17:50:52 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:50.437 17:50:54 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:50.437 17:50:54 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:50.437 17:50:54 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:50.437 17:50:54 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:50.437 17:50:54 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:50.437 17:50:54 -- common/autotest_common.sh@1187 -- # return 0 00:13:50.437 17:50:54 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:50.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.698 17:50:54 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:50.698 17:50:54 -- common/autotest_common.sh@1198 -- # local i=0 00:13:50.698 17:50:54 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:50.698 17:50:54 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:50.698 17:50:54 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:50.698 17:50:54 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:50.698 17:50:54 -- common/autotest_common.sh@1210 -- # return 0 00:13:50.698 17:50:54 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:50.698 17:50:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.698 17:50:54 -- common/autotest_common.sh@10 -- # set +x 00:13:50.698 17:50:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.698 17:50:54 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:50.698 17:50:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.698 17:50:54 -- common/autotest_common.sh@10 -- # set +x 00:13:50.698 17:50:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.698 17:50:54 -- target/rpc.sh@99 -- # seq 1 5 00:13:50.698 17:50:54 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:50.698 17:50:54 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:50.698 17:50:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.698 17:50:54 -- common/autotest_common.sh@10 -- # set +x 00:13:50.698 17:50:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.698 17:50:54 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.698 17:50:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.698 17:50:54 -- common/autotest_common.sh@10 -- # set +x 00:13:50.698 [2024-07-22 17:50:54.831524] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.698 17:50:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.698 17:50:54 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:50.698 17:50:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.698 17:50:54 -- common/autotest_common.sh@10 -- # set +x 00:13:50.698 17:50:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.698 17:50:54 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:50.698 17:50:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.698 17:50:54 -- common/autotest_common.sh@10 -- # set +x 00:13:50.698 17:50:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.698 17:50:54 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.698 17:50:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.698 17:50:54 -- common/autotest_common.sh@10 -- # set +x 00:13:50.698 17:50:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.698 17:50:54 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:50.698 17:50:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.698 17:50:54 -- common/autotest_common.sh@10 -- # set +x 00:13:50.698 17:50:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.698 17:50:54 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:50.698 17:50:54 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:50.698 17:50:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.698 17:50:54 -- common/autotest_common.sh@10 -- # set +x 00:13:50.698 17:50:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.698 17:50:54 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.698 17:50:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.698 17:50:54 -- common/autotest_common.sh@10 -- # set +x 00:13:50.698 [2024-07-22 17:50:54.887663] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.699 17:50:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.699 17:50:54 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:50.699 17:50:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.699 17:50:54 -- common/autotest_common.sh@10 -- # set +x 00:13:50.699 17:50:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.699 17:50:54 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:50.699 17:50:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.699 17:50:54 -- common/autotest_common.sh@10 -- # set +x 00:13:50.699 17:50:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.699 17:50:54 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.699 17:50:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.699 17:50:54 -- 
common/autotest_common.sh@10 -- # set +x 00:13:50.699 17:50:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.699 17:50:54 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:50.699 17:50:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.699 17:50:54 -- common/autotest_common.sh@10 -- # set +x 00:13:50.699 17:50:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.699 17:50:54 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:50.699 17:50:54 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:50.699 17:50:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.699 17:50:54 -- common/autotest_common.sh@10 -- # set +x 00:13:50.699 17:50:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.699 17:50:54 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.699 17:50:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.699 17:50:54 -- common/autotest_common.sh@10 -- # set +x 00:13:50.699 [2024-07-22 17:50:54.947849] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.699 17:50:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.699 17:50:54 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:50.699 17:50:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.699 17:50:54 -- common/autotest_common.sh@10 -- # set +x 00:13:50.699 17:50:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.699 17:50:54 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:50.699 17:50:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.699 17:50:54 -- common/autotest_common.sh@10 -- # set +x 00:13:50.699 17:50:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.699 17:50:54 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.699 17:50:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.699 17:50:54 -- common/autotest_common.sh@10 -- # set +x 00:13:50.960 17:50:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.960 17:50:54 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:50.960 17:50:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.960 17:50:54 -- common/autotest_common.sh@10 -- # set +x 00:13:50.960 17:50:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.960 17:50:54 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:50.960 17:50:54 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:50.960 17:50:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.960 17:50:54 -- common/autotest_common.sh@10 -- # set +x 00:13:50.960 17:50:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.960 17:50:54 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.960 17:50:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.960 17:50:54 -- common/autotest_common.sh@10 -- # set +x 00:13:50.960 [2024-07-22 17:50:55.004032] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.960 17:50:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.960 
17:50:55 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:50.960 17:50:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.960 17:50:55 -- common/autotest_common.sh@10 -- # set +x 00:13:50.960 17:50:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.960 17:50:55 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:50.960 17:50:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.960 17:50:55 -- common/autotest_common.sh@10 -- # set +x 00:13:50.960 17:50:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.960 17:50:55 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.960 17:50:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.960 17:50:55 -- common/autotest_common.sh@10 -- # set +x 00:13:50.960 17:50:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.960 17:50:55 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:50.960 17:50:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.960 17:50:55 -- common/autotest_common.sh@10 -- # set +x 00:13:50.960 17:50:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.960 17:50:55 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:50.960 17:50:55 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:50.960 17:50:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.960 17:50:55 -- common/autotest_common.sh@10 -- # set +x 00:13:50.960 17:50:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.960 17:50:55 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.960 17:50:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.960 17:50:55 -- common/autotest_common.sh@10 -- # set +x 00:13:50.960 [2024-07-22 17:50:55.060252] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.960 17:50:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.960 17:50:55 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:50.960 17:50:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.960 17:50:55 -- common/autotest_common.sh@10 -- # set +x 00:13:50.960 17:50:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.960 17:50:55 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:50.960 17:50:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.960 17:50:55 -- common/autotest_common.sh@10 -- # set +x 00:13:50.960 17:50:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.960 17:50:55 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.960 17:50:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.960 17:50:55 -- common/autotest_common.sh@10 -- # set +x 00:13:50.960 17:50:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.960 17:50:55 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:50.960 17:50:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.960 17:50:55 -- common/autotest_common.sh@10 -- # set +x 00:13:50.960 17:50:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.960 17:50:55 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
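The nvmf_get_stats dump that follows is reduced to two totals by the jsum helper, which just pipes the JSON through jq and sums the selected field with awk. An equivalent one-liner, runnable against a live target (rpc.py path assumed):
    # total io_qpairs handled across all poll groups
    scripts/rpc.py nvmf_get_stats \
        | jq '.poll_groups[].io_qpairs' \
        | awk '{s += $1} END {print s}'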
00:13:50.960 17:50:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.960 17:50:55 -- common/autotest_common.sh@10 -- # set +x 00:13:50.960 17:50:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.960 17:50:55 -- target/rpc.sh@110 -- # stats='{ 00:13:50.960 "tick_rate": 2600000000, 00:13:50.960 "poll_groups": [ 00:13:50.960 { 00:13:50.960 "name": "nvmf_tgt_poll_group_0", 00:13:50.960 "admin_qpairs": 0, 00:13:50.960 "io_qpairs": 224, 00:13:50.960 "current_admin_qpairs": 0, 00:13:50.960 "current_io_qpairs": 0, 00:13:50.960 "pending_bdev_io": 0, 00:13:50.960 "completed_nvme_io": 227, 00:13:50.960 "transports": [ 00:13:50.960 { 00:13:50.960 "trtype": "TCP" 00:13:50.960 } 00:13:50.960 ] 00:13:50.960 }, 00:13:50.960 { 00:13:50.960 "name": "nvmf_tgt_poll_group_1", 00:13:50.960 "admin_qpairs": 1, 00:13:50.960 "io_qpairs": 223, 00:13:50.960 "current_admin_qpairs": 0, 00:13:50.960 "current_io_qpairs": 0, 00:13:50.960 "pending_bdev_io": 0, 00:13:50.960 "completed_nvme_io": 224, 00:13:50.960 "transports": [ 00:13:50.960 { 00:13:50.960 "trtype": "TCP" 00:13:50.960 } 00:13:50.960 ] 00:13:50.960 }, 00:13:50.960 { 00:13:50.960 "name": "nvmf_tgt_poll_group_2", 00:13:50.960 "admin_qpairs": 6, 00:13:50.960 "io_qpairs": 218, 00:13:50.960 "current_admin_qpairs": 0, 00:13:50.960 "current_io_qpairs": 0, 00:13:50.960 "pending_bdev_io": 0, 00:13:50.960 "completed_nvme_io": 267, 00:13:50.960 "transports": [ 00:13:50.960 { 00:13:50.960 "trtype": "TCP" 00:13:50.960 } 00:13:50.960 ] 00:13:50.960 }, 00:13:50.960 { 00:13:50.960 "name": "nvmf_tgt_poll_group_3", 00:13:50.960 "admin_qpairs": 0, 00:13:50.960 "io_qpairs": 224, 00:13:50.960 "current_admin_qpairs": 0, 00:13:50.960 "current_io_qpairs": 0, 00:13:50.960 "pending_bdev_io": 0, 00:13:50.960 "completed_nvme_io": 521, 00:13:50.960 "transports": [ 00:13:50.960 { 00:13:50.960 "trtype": "TCP" 00:13:50.961 } 00:13:50.961 ] 00:13:50.961 } 00:13:50.961 ] 00:13:50.961 }' 00:13:50.961 17:50:55 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:50.961 17:50:55 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:50.961 17:50:55 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:50.961 17:50:55 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:50.961 17:50:55 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:50.961 17:50:55 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:50.961 17:50:55 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:50.961 17:50:55 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:50.961 17:50:55 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:50.961 17:50:55 -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:50.961 17:50:55 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:50.961 17:50:55 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:50.961 17:50:55 -- target/rpc.sh@123 -- # nvmftestfini 00:13:50.961 17:50:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:50.961 17:50:55 -- nvmf/common.sh@116 -- # sync 00:13:50.961 17:50:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:50.961 17:50:55 -- nvmf/common.sh@119 -- # set +e 00:13:50.961 17:50:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:50.961 17:50:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:50.961 rmmod nvme_tcp 00:13:51.221 rmmod nvme_fabrics 00:13:51.221 rmmod nvme_keyring 00:13:51.221 17:50:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:51.221 17:50:55 -- nvmf/common.sh@123 -- # set -e 00:13:51.221 17:50:55 -- 
nvmf/common.sh@124 -- # return 0 00:13:51.221 17:50:55 -- nvmf/common.sh@477 -- # '[' -n 1583676 ']' 00:13:51.221 17:50:55 -- nvmf/common.sh@478 -- # killprocess 1583676 00:13:51.221 17:50:55 -- common/autotest_common.sh@926 -- # '[' -z 1583676 ']' 00:13:51.221 17:50:55 -- common/autotest_common.sh@930 -- # kill -0 1583676 00:13:51.221 17:50:55 -- common/autotest_common.sh@931 -- # uname 00:13:51.221 17:50:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:51.221 17:50:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1583676 00:13:51.221 17:50:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:51.221 17:50:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:51.221 17:50:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1583676' 00:13:51.221 killing process with pid 1583676 00:13:51.221 17:50:55 -- common/autotest_common.sh@945 -- # kill 1583676 00:13:51.221 17:50:55 -- common/autotest_common.sh@950 -- # wait 1583676 00:13:51.221 17:50:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:51.221 17:50:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:51.221 17:50:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:51.221 17:50:55 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:51.221 17:50:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:51.221 17:50:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.221 17:50:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:51.221 17:50:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.765 17:50:57 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:53.765 00:13:53.765 real 0m37.832s 00:13:53.765 user 1m51.816s 00:13:53.765 sys 0m7.501s 00:13:53.765 17:50:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:53.765 17:50:57 -- common/autotest_common.sh@10 -- # set +x 00:13:53.765 ************************************ 00:13:53.765 END TEST nvmf_rpc 00:13:53.765 ************************************ 00:13:53.765 17:50:57 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:53.765 17:50:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:53.765 17:50:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:53.765 17:50:57 -- common/autotest_common.sh@10 -- # set +x 00:13:53.765 ************************************ 00:13:53.765 START TEST nvmf_invalid 00:13:53.765 ************************************ 00:13:53.765 17:50:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:53.765 * Looking for test storage... 
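nvmf_invalid, which starts here, feeds deliberately malformed arguments (an unknown target name, serial and model numbers containing control characters) to nvmf_create_subsystem and asserts on the JSON-RPC error text that comes back. The checking pattern boils down to something like the sketch below (rpc.py path and the NQN are taken from the log; the exact helper structure in invalid.sh differs):
    RPC=scripts/rpc.py
    # an unknown -t/--tgt-name must be rejected with "Unable to find target"
    out=$($RPC nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode12743 2>&1) || true
    [[ $out == *"Unable to find target"* ]] || echo "unexpected response: $out"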
00:13:53.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:53.765 17:50:57 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:53.765 17:50:57 -- nvmf/common.sh@7 -- # uname -s 00:13:53.766 17:50:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:53.766 17:50:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:53.766 17:50:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:53.766 17:50:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:53.766 17:50:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:53.766 17:50:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:53.766 17:50:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:53.766 17:50:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:53.766 17:50:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:53.766 17:50:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:53.766 17:50:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:53.766 17:50:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:13:53.766 17:50:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:53.766 17:50:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:53.766 17:50:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:53.766 17:50:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:53.766 17:50:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:53.766 17:50:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:53.766 17:50:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:53.766 17:50:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.766 17:50:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.766 17:50:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.766 17:50:57 -- paths/export.sh@5 -- # export PATH 00:13:53.766 17:50:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.766 17:50:57 -- nvmf/common.sh@46 -- # : 0 00:13:53.766 17:50:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:53.766 17:50:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:53.766 17:50:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:53.766 17:50:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:53.766 17:50:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:53.766 17:50:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:53.766 17:50:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:53.766 17:50:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:53.766 17:50:57 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:53.766 17:50:57 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:53.766 17:50:57 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:53.766 17:50:57 -- target/invalid.sh@14 -- # target=foobar 00:13:53.766 17:50:57 -- target/invalid.sh@16 -- # RANDOM=0 00:13:53.766 17:50:57 -- target/invalid.sh@34 -- # nvmftestinit 00:13:53.766 17:50:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:53.766 17:50:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:53.766 17:50:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:53.766 17:50:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:53.766 17:50:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:53.766 17:50:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.766 17:50:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:53.766 17:50:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.766 17:50:57 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:53.766 17:50:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:53.766 17:50:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:53.766 17:50:57 -- common/autotest_common.sh@10 -- # set +x 00:14:01.957 17:51:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:01.957 17:51:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:01.957 17:51:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:01.957 17:51:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:01.957 17:51:05 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:01.957 17:51:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:01.957 17:51:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:01.957 17:51:05 -- nvmf/common.sh@294 -- # net_devs=() 00:14:01.957 17:51:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:01.957 17:51:05 -- nvmf/common.sh@295 -- # e810=() 00:14:01.957 17:51:05 -- nvmf/common.sh@295 -- # local -ga e810 00:14:01.957 17:51:05 -- nvmf/common.sh@296 -- # x722=() 00:14:01.957 17:51:05 -- nvmf/common.sh@296 -- # local -ga x722 00:14:01.957 17:51:05 -- nvmf/common.sh@297 -- # mlx=() 00:14:01.957 17:51:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:01.957 17:51:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:01.957 17:51:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:01.957 17:51:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:01.957 17:51:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:01.957 17:51:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:01.957 17:51:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:01.957 17:51:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:01.957 17:51:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:01.957 17:51:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:01.957 17:51:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:01.957 17:51:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:01.957 17:51:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:01.957 17:51:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:01.957 17:51:05 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:01.957 17:51:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:01.957 17:51:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:01.957 17:51:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:01.957 17:51:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:01.957 17:51:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:01.957 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:01.957 17:51:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:01.957 17:51:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:01.957 17:51:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:01.957 17:51:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:01.957 17:51:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:01.957 17:51:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:01.957 17:51:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:01.957 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:01.957 17:51:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:01.957 17:51:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:01.957 17:51:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:01.957 17:51:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:01.957 17:51:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:01.957 17:51:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:01.957 17:51:05 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:01.957 17:51:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:01.957 17:51:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:01.957 
17:51:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:01.957 17:51:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:01.957 17:51:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:01.958 17:51:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:01.958 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:01.958 17:51:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:01.958 17:51:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:01.958 17:51:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:01.958 17:51:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:01.958 17:51:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:01.958 17:51:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:01.958 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:01.958 17:51:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:01.958 17:51:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:01.958 17:51:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:01.958 17:51:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:01.958 17:51:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:01.958 17:51:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:01.958 17:51:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:01.958 17:51:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:01.958 17:51:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:01.958 17:51:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:01.958 17:51:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:01.958 17:51:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:01.958 17:51:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:01.958 17:51:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:01.958 17:51:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:01.958 17:51:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:01.958 17:51:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:01.958 17:51:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:01.958 17:51:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:01.958 17:51:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:01.958 17:51:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:01.958 17:51:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:01.958 17:51:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:01.958 17:51:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:01.958 17:51:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:01.958 17:51:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:01.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:01.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:14:01.958 00:14:01.958 --- 10.0.0.2 ping statistics --- 00:14:01.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.958 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:14:01.958 17:51:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:01.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:01.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:14:01.958 00:14:01.958 --- 10.0.0.1 ping statistics --- 00:14:01.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.958 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:14:01.958 17:51:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:01.958 17:51:05 -- nvmf/common.sh@410 -- # return 0 00:14:01.958 17:51:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:01.958 17:51:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:01.958 17:51:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:01.958 17:51:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:01.958 17:51:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:01.958 17:51:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:01.958 17:51:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:01.958 17:51:05 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:01.958 17:51:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:01.958 17:51:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:01.958 17:51:05 -- common/autotest_common.sh@10 -- # set +x 00:14:01.958 17:51:05 -- nvmf/common.sh@469 -- # nvmfpid=1592839 00:14:01.958 17:51:05 -- nvmf/common.sh@470 -- # waitforlisten 1592839 00:14:01.958 17:51:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:01.958 17:51:05 -- common/autotest_common.sh@819 -- # '[' -z 1592839 ']' 00:14:01.958 17:51:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.958 17:51:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:01.958 17:51:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.958 17:51:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:01.958 17:51:05 -- common/autotest_common.sh@10 -- # set +x 00:14:01.958 [2024-07-22 17:51:05.919327] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:01.958 [2024-07-22 17:51:05.919405] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.958 EAL: No free 2048 kB hugepages reported on node 1 00:14:01.958 [2024-07-22 17:51:06.013074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:01.958 [2024-07-22 17:51:06.104362] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:01.958 [2024-07-22 17:51:06.104532] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.958 [2024-07-22 17:51:06.104548] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:01.958 [2024-07-22 17:51:06.104556] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
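The interface plumbing a few lines up is what lets one machine act as both target and initiator: one port of the E810 NIC (cvl_0_0) is moved into a private network namespace for the SPDK target, the other (cvl_0_1) stays in the root namespace as the kernel host, and the two ends get 10.0.0.2 and 10.0.0.1. Reduced to its core (device names and addresses copied from the log above):
    ip netns add cvl_0_0_ns_spdk                                   # namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ping -c 1 10.0.0.2                                             # sanity check, as in the log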
00:14:01.958 [2024-07-22 17:51:06.104724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.958 [2024-07-22 17:51:06.104847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:01.958 [2024-07-22 17:51:06.104977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:01.958 [2024-07-22 17:51:06.104980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.528 17:51:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:02.528 17:51:06 -- common/autotest_common.sh@852 -- # return 0 00:14:02.528 17:51:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:02.528 17:51:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:02.528 17:51:06 -- common/autotest_common.sh@10 -- # set +x 00:14:02.789 17:51:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.789 17:51:06 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:02.789 17:51:06 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode12743 00:14:02.789 [2024-07-22 17:51:06.983969] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:02.789 17:51:07 -- target/invalid.sh@40 -- # out='request: 00:14:02.789 { 00:14:02.789 "nqn": "nqn.2016-06.io.spdk:cnode12743", 00:14:02.789 "tgt_name": "foobar", 00:14:02.789 "method": "nvmf_create_subsystem", 00:14:02.789 "req_id": 1 00:14:02.789 } 00:14:02.789 Got JSON-RPC error response 00:14:02.789 response: 00:14:02.789 { 00:14:02.789 "code": -32603, 00:14:02.789 "message": "Unable to find target foobar" 00:14:02.789 }' 00:14:02.789 17:51:07 -- target/invalid.sh@41 -- # [[ request: 00:14:02.789 { 00:14:02.789 "nqn": "nqn.2016-06.io.spdk:cnode12743", 00:14:02.789 "tgt_name": "foobar", 00:14:02.789 "method": "nvmf_create_subsystem", 00:14:02.789 "req_id": 1 00:14:02.789 } 00:14:02.789 Got JSON-RPC error response 00:14:02.789 response: 00:14:02.789 { 00:14:02.789 "code": -32603, 00:14:02.789 "message": "Unable to find target foobar" 00:14:02.789 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:02.789 17:51:07 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:02.789 17:51:07 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2637 00:14:03.049 [2024-07-22 17:51:07.196723] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2637: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:03.049 17:51:07 -- target/invalid.sh@45 -- # out='request: 00:14:03.049 { 00:14:03.049 "nqn": "nqn.2016-06.io.spdk:cnode2637", 00:14:03.049 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:03.049 "method": "nvmf_create_subsystem", 00:14:03.049 "req_id": 1 00:14:03.049 } 00:14:03.049 Got JSON-RPC error response 00:14:03.049 response: 00:14:03.049 { 00:14:03.049 "code": -32602, 00:14:03.049 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:03.049 }' 00:14:03.049 17:51:07 -- target/invalid.sh@46 -- # [[ request: 00:14:03.049 { 00:14:03.049 "nqn": "nqn.2016-06.io.spdk:cnode2637", 00:14:03.049 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:03.049 "method": "nvmf_create_subsystem", 00:14:03.049 "req_id": 1 00:14:03.049 } 00:14:03.049 Got JSON-RPC error response 00:14:03.049 response: 00:14:03.049 { 
00:14:03.049 "code": -32602, 00:14:03.049 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:03.049 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:03.049 17:51:07 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:03.049 17:51:07 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode30633 00:14:03.310 [2024-07-22 17:51:07.409357] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30633: invalid model number 'SPDK_Controller' 00:14:03.310 17:51:07 -- target/invalid.sh@50 -- # out='request: 00:14:03.310 { 00:14:03.310 "nqn": "nqn.2016-06.io.spdk:cnode30633", 00:14:03.310 "model_number": "SPDK_Controller\u001f", 00:14:03.310 "method": "nvmf_create_subsystem", 00:14:03.310 "req_id": 1 00:14:03.310 } 00:14:03.310 Got JSON-RPC error response 00:14:03.310 response: 00:14:03.310 { 00:14:03.310 "code": -32602, 00:14:03.310 "message": "Invalid MN SPDK_Controller\u001f" 00:14:03.310 }' 00:14:03.310 17:51:07 -- target/invalid.sh@51 -- # [[ request: 00:14:03.310 { 00:14:03.310 "nqn": "nqn.2016-06.io.spdk:cnode30633", 00:14:03.310 "model_number": "SPDK_Controller\u001f", 00:14:03.310 "method": "nvmf_create_subsystem", 00:14:03.310 "req_id": 1 00:14:03.310 } 00:14:03.310 Got JSON-RPC error response 00:14:03.310 response: 00:14:03.310 { 00:14:03.310 "code": -32602, 00:14:03.310 "message": "Invalid MN SPDK_Controller\u001f" 00:14:03.310 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:03.310 17:51:07 -- target/invalid.sh@54 -- # gen_random_s 21 00:14:03.310 17:51:07 -- target/invalid.sh@19 -- # local length=21 ll 00:14:03.310 17:51:07 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:03.310 17:51:07 -- target/invalid.sh@21 -- # local chars 00:14:03.310 17:51:07 -- target/invalid.sh@22 -- # local string 00:14:03.310 17:51:07 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:03.310 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # printf %x 41 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x29' 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # string+=')' 00:14:03.310 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.310 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # printf %x 65 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # string+=A 00:14:03.310 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.310 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # printf %x 109 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # string+=m 00:14:03.310 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.310 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # printf %x 61 00:14:03.310 17:51:07 -- 
target/invalid.sh@25 -- # echo -e '\x3d' 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # string+== 00:14:03.310 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.310 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # printf %x 58 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # string+=: 00:14:03.310 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.310 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # printf %x 57 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # string+=9 00:14:03.310 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.310 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # printf %x 66 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x42' 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # string+=B 00:14:03.310 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.310 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # printf %x 90 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # string+=Z 00:14:03.310 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.310 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # printf %x 125 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # string+='}' 00:14:03.310 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.310 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # printf %x 108 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # string+=l 00:14:03.310 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.310 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # printf %x 84 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x54' 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # string+=T 00:14:03.310 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.310 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # printf %x 51 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x33' 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # string+=3 00:14:03.310 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.310 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # printf %x 96 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # string+='`' 00:14:03.310 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.310 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # printf %x 96 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:03.310 17:51:07 -- target/invalid.sh@25 -- # string+='`' 00:14:03.310 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.310 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.311 17:51:07 -- target/invalid.sh@25 -- # printf %x 112 00:14:03.311 17:51:07 -- 
target/invalid.sh@25 -- # echo -e '\x70' 00:14:03.311 17:51:07 -- target/invalid.sh@25 -- # string+=p 00:14:03.311 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.311 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.311 17:51:07 -- target/invalid.sh@25 -- # printf %x 123 00:14:03.311 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:14:03.311 17:51:07 -- target/invalid.sh@25 -- # string+='{' 00:14:03.311 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.311 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.311 17:51:07 -- target/invalid.sh@25 -- # printf %x 51 00:14:03.311 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x33' 00:14:03.311 17:51:07 -- target/invalid.sh@25 -- # string+=3 00:14:03.311 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.311 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.311 17:51:07 -- target/invalid.sh@25 -- # printf %x 81 00:14:03.311 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x51' 00:14:03.311 17:51:07 -- target/invalid.sh@25 -- # string+=Q 00:14:03.311 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.311 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.311 17:51:07 -- target/invalid.sh@25 -- # printf %x 68 00:14:03.311 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:03.571 17:51:07 -- target/invalid.sh@25 -- # string+=D 00:14:03.571 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.571 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.571 17:51:07 -- target/invalid.sh@25 -- # printf %x 44 00:14:03.571 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:14:03.571 17:51:07 -- target/invalid.sh@25 -- # string+=, 00:14:03.571 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.571 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.571 17:51:07 -- target/invalid.sh@25 -- # printf %x 66 00:14:03.571 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x42' 00:14:03.571 17:51:07 -- target/invalid.sh@25 -- # string+=B 00:14:03.571 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.571 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.571 17:51:07 -- target/invalid.sh@28 -- # [[ ) == \- ]] 00:14:03.571 17:51:07 -- target/invalid.sh@31 -- # echo ')Am=:9BZ}lT3``p{3QD,B' 00:14:03.571 17:51:07 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ')Am=:9BZ}lT3``p{3QD,B' nqn.2016-06.io.spdk:cnode15731 00:14:03.571 [2024-07-22 17:51:07.774517] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15731: invalid serial number ')Am=:9BZ}lT3``p{3QD,B' 00:14:03.571 17:51:07 -- target/invalid.sh@54 -- # out='request: 00:14:03.571 { 00:14:03.571 "nqn": "nqn.2016-06.io.spdk:cnode15731", 00:14:03.571 "serial_number": ")Am=:9BZ}lT3``p{3QD,B", 00:14:03.571 "method": "nvmf_create_subsystem", 00:14:03.571 "req_id": 1 00:14:03.571 } 00:14:03.571 Got JSON-RPC error response 00:14:03.571 response: 00:14:03.571 { 00:14:03.571 "code": -32602, 00:14:03.571 "message": "Invalid SN )Am=:9BZ}lT3``p{3QD,B" 00:14:03.571 }' 00:14:03.571 17:51:07 -- target/invalid.sh@55 -- # [[ request: 00:14:03.571 { 00:14:03.571 "nqn": "nqn.2016-06.io.spdk:cnode15731", 00:14:03.571 "serial_number": ")Am=:9BZ}lT3``p{3QD,B", 00:14:03.571 "method": "nvmf_create_subsystem", 00:14:03.571 "req_id": 1 00:14:03.571 } 00:14:03.571 Got JSON-RPC error response 00:14:03.571 response: 00:14:03.571 { 00:14:03.571 "code": -32602, 00:14:03.571 
"message": "Invalid SN )Am=:9BZ}lT3``p{3QD,B" 00:14:03.571 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:03.571 17:51:07 -- target/invalid.sh@58 -- # gen_random_s 41 00:14:03.571 17:51:07 -- target/invalid.sh@19 -- # local length=41 ll 00:14:03.571 17:51:07 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:03.571 17:51:07 -- target/invalid.sh@21 -- # local chars 00:14:03.571 17:51:07 -- target/invalid.sh@22 -- # local string 00:14:03.571 17:51:07 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:03.571 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.571 17:51:07 -- target/invalid.sh@25 -- # printf %x 41 00:14:03.571 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x29' 00:14:03.571 17:51:07 -- target/invalid.sh@25 -- # string+=')' 00:14:03.571 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.571 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.571 17:51:07 -- target/invalid.sh@25 -- # printf %x 88 00:14:03.571 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:03.571 17:51:07 -- target/invalid.sh@25 -- # string+=X 00:14:03.571 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.571 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.571 17:51:07 -- target/invalid.sh@25 -- # printf %x 121 00:14:03.571 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x79' 00:14:03.571 17:51:07 -- target/invalid.sh@25 -- # string+=y 00:14:03.571 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.571 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.571 17:51:07 -- target/invalid.sh@25 -- # printf %x 44 00:14:03.571 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:14:03.571 17:51:07 -- target/invalid.sh@25 -- # string+=, 00:14:03.571 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.571 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # printf %x 58 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # string+=: 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # printf %x 32 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x20' 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # string+=' ' 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # printf %x 57 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # string+=9 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # printf %x 86 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x56' 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # string+=V 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # 
(( ll++ )) 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # printf %x 51 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x33' 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # string+=3 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # printf %x 94 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # string+='^' 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # printf %x 93 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # string+=']' 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # printf %x 62 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # string+='>' 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # printf %x 54 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # string+=6 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # printf %x 62 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # string+='>' 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # printf %x 77 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # string+=M 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # printf %x 33 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x21' 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # string+='!' 
00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # printf %x 48 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # string+=0 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # printf %x 47 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # string+=/ 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # printf %x 113 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x71' 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # string+=q 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # printf %x 80 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x50' 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # string+=P 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # printf %x 93 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # string+=']' 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # printf %x 82 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x52' 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # string+=R 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # printf %x 32 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x20' 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # string+=' ' 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # printf %x 92 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # string+='\' 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # printf %x 113 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # echo -e '\x71' 00:14:03.833 17:51:07 -- target/invalid.sh@25 -- # string+=q 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.833 17:51:07 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.833 17:51:08 -- target/invalid.sh@25 -- # printf %x 81 00:14:03.833 17:51:08 -- target/invalid.sh@25 -- # echo -e '\x51' 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # string+=Q 00:14:03.834 17:51:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.834 17:51:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # printf %x 110 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # string+=n 
00:14:03.834 17:51:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.834 17:51:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # printf %x 63 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # string+='?' 00:14:03.834 17:51:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.834 17:51:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # printf %x 80 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # echo -e '\x50' 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # string+=P 00:14:03.834 17:51:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.834 17:51:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # printf %x 96 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # string+='`' 00:14:03.834 17:51:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.834 17:51:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # printf %x 58 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # string+=: 00:14:03.834 17:51:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.834 17:51:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # printf %x 84 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # echo -e '\x54' 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # string+=T 00:14:03.834 17:51:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.834 17:51:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # printf %x 54 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # string+=6 00:14:03.834 17:51:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.834 17:51:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # printf %x 103 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # echo -e '\x67' 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # string+=g 00:14:03.834 17:51:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.834 17:51:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # printf %x 36 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # echo -e '\x24' 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # string+='$' 00:14:03.834 17:51:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.834 17:51:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # printf %x 50 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # echo -e '\x32' 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # string+=2 00:14:03.834 17:51:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.834 17:51:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # printf %x 109 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # string+=m 00:14:03.834 17:51:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.834 17:51:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # printf %x 38 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # echo -e '\x26' 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # string+='&' 
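The character-by-character appends above and below are invalid.sh's gen_random_s helper building a 41-character candidate model number out of printable ASCII (the chars array runs from 0x20 to 0x7f); 41 characters is one more than the 40-byte NVMe model-number field allows. A condensed sketch of the same pattern, with an illustrative helper name rather than the script verbatim and paths abbreviated, followed by the assertion the test then makes against the JSON-RPC error text:

    gen_random_string() {
        local length=$1 ll string=""
        for (( ll = 0; ll < length; ll++ )); do
            # pick a random code point in 0x20-0x7f and append that character
            local code=$(( 32 + RANDOM % 96 ))
            string+=$(printf "\x$(printf '%x' "$code")")
        done
        echo "$string"
    }

    bad_mn=$(gen_random_string 41)
    out=$(scripts/rpc.py nvmf_create_subsystem -d "$bad_mn" nqn.2016-06.io.spdk:cnode20372 2>&1) || true
    [[ $out == *"Invalid MN"* ]]   # 41 chars overflows the 40-byte MN field, so the target must reject it

The real script drives the same idea through its traced rpc calls and xtrace, which is why every increment, printf and append shows up as its own log line here.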
00:14:03.834 17:51:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.834 17:51:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # printf %x 116 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # string+=t 00:14:03.834 17:51:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.834 17:51:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.834 17:51:08 -- target/invalid.sh@25 -- # printf %x 40 00:14:04.094 17:51:08 -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:04.094 17:51:08 -- target/invalid.sh@25 -- # string+='(' 00:14:04.094 17:51:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.094 17:51:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.094 17:51:08 -- target/invalid.sh@25 -- # printf %x 101 00:14:04.094 17:51:08 -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:04.094 17:51:08 -- target/invalid.sh@25 -- # string+=e 00:14:04.094 17:51:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:14:04.094 17:51:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:14:04.094 17:51:08 -- target/invalid.sh@28 -- # [[ ) == \- ]] 00:14:04.094 17:51:08 -- target/invalid.sh@31 -- # echo ')Xy,: 9V3^]>6>M!0/qP]R \qQn?P`:T6g$2m&t(e' 00:14:04.094 17:51:08 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ')Xy,: 9V3^]>6>M!0/qP]R \qQn?P`:T6g$2m&t(e' nqn.2016-06.io.spdk:cnode20372 00:14:04.094 [2024-07-22 17:51:08.292174] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20372: invalid model number ')Xy,: 9V3^]>6>M!0/qP]R \qQn?P`:T6g$2m&t(e' 00:14:04.094 17:51:08 -- target/invalid.sh@58 -- # out='request: 00:14:04.094 { 00:14:04.094 "nqn": "nqn.2016-06.io.spdk:cnode20372", 00:14:04.094 "model_number": ")Xy,: 9V3^]>6>M!0/qP]R \\qQn?P`:T6g$2m&t(e", 00:14:04.094 "method": "nvmf_create_subsystem", 00:14:04.094 "req_id": 1 00:14:04.094 } 00:14:04.094 Got JSON-RPC error response 00:14:04.094 response: 00:14:04.094 { 00:14:04.094 "code": -32602, 00:14:04.094 "message": "Invalid MN )Xy,: 9V3^]>6>M!0/qP]R \\qQn?P`:T6g$2m&t(e" 00:14:04.094 }' 00:14:04.094 17:51:08 -- target/invalid.sh@59 -- # [[ request: 00:14:04.094 { 00:14:04.094 "nqn": "nqn.2016-06.io.spdk:cnode20372", 00:14:04.094 "model_number": ")Xy,: 9V3^]>6>M!0/qP]R \\qQn?P`:T6g$2m&t(e", 00:14:04.094 "method": "nvmf_create_subsystem", 00:14:04.094 "req_id": 1 00:14:04.094 } 00:14:04.094 Got JSON-RPC error response 00:14:04.094 response: 00:14:04.094 { 00:14:04.094 "code": -32602, 00:14:04.094 "message": "Invalid MN )Xy,: 9V3^]>6>M!0/qP]R \\qQn?P`:T6g$2m&t(e" 00:14:04.094 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:04.094 17:51:08 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:04.355 [2024-07-22 17:51:08.500919] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:04.355 17:51:08 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:04.614 17:51:08 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:04.614 17:51:08 -- target/invalid.sh@67 -- # head -n 1 00:14:04.614 17:51:08 -- target/invalid.sh@67 -- # echo '' 00:14:04.614 17:51:08 -- target/invalid.sh@67 -- # IP= 00:14:04.615 17:51:08 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener 
nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:04.874 [2024-07-22 17:51:08.910262] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:04.874 17:51:08 -- target/invalid.sh@69 -- # out='request: 00:14:04.874 { 00:14:04.874 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:04.874 "listen_address": { 00:14:04.874 "trtype": "tcp", 00:14:04.874 "traddr": "", 00:14:04.874 "trsvcid": "4421" 00:14:04.874 }, 00:14:04.874 "method": "nvmf_subsystem_remove_listener", 00:14:04.874 "req_id": 1 00:14:04.874 } 00:14:04.874 Got JSON-RPC error response 00:14:04.874 response: 00:14:04.874 { 00:14:04.874 "code": -32602, 00:14:04.874 "message": "Invalid parameters" 00:14:04.874 }' 00:14:04.874 17:51:08 -- target/invalid.sh@70 -- # [[ request: 00:14:04.874 { 00:14:04.874 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:04.874 "listen_address": { 00:14:04.874 "trtype": "tcp", 00:14:04.874 "traddr": "", 00:14:04.874 "trsvcid": "4421" 00:14:04.874 }, 00:14:04.874 "method": "nvmf_subsystem_remove_listener", 00:14:04.874 "req_id": 1 00:14:04.874 } 00:14:04.874 Got JSON-RPC error response 00:14:04.874 response: 00:14:04.874 { 00:14:04.874 "code": -32602, 00:14:04.874 "message": "Invalid parameters" 00:14:04.874 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:04.874 17:51:08 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7443 -i 0 00:14:04.874 [2024-07-22 17:51:09.114900] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7443: invalid cntlid range [0-65519] 00:14:04.874 17:51:09 -- target/invalid.sh@73 -- # out='request: 00:14:04.874 { 00:14:04.874 "nqn": "nqn.2016-06.io.spdk:cnode7443", 00:14:04.874 "min_cntlid": 0, 00:14:04.874 "method": "nvmf_create_subsystem", 00:14:04.875 "req_id": 1 00:14:04.875 } 00:14:04.875 Got JSON-RPC error response 00:14:04.875 response: 00:14:04.875 { 00:14:04.875 "code": -32602, 00:14:04.875 "message": "Invalid cntlid range [0-65519]" 00:14:04.875 }' 00:14:04.875 17:51:09 -- target/invalid.sh@74 -- # [[ request: 00:14:04.875 { 00:14:04.875 "nqn": "nqn.2016-06.io.spdk:cnode7443", 00:14:04.875 "min_cntlid": 0, 00:14:04.875 "method": "nvmf_create_subsystem", 00:14:04.875 "req_id": 1 00:14:04.875 } 00:14:04.875 Got JSON-RPC error response 00:14:04.875 response: 00:14:04.875 { 00:14:04.875 "code": -32602, 00:14:04.875 "message": "Invalid cntlid range [0-65519]" 00:14:04.875 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:05.134 17:51:09 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16286 -i 65520 00:14:05.134 [2024-07-22 17:51:09.291463] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16286: invalid cntlid range [65520-65519] 00:14:05.134 17:51:09 -- target/invalid.sh@75 -- # out='request: 00:14:05.134 { 00:14:05.134 "nqn": "nqn.2016-06.io.spdk:cnode16286", 00:14:05.134 "min_cntlid": 65520, 00:14:05.134 "method": "nvmf_create_subsystem", 00:14:05.134 "req_id": 1 00:14:05.134 } 00:14:05.134 Got JSON-RPC error response 00:14:05.134 response: 00:14:05.134 { 00:14:05.134 "code": -32602, 00:14:05.134 "message": "Invalid cntlid range [65520-65519]" 00:14:05.134 }' 00:14:05.134 17:51:09 -- target/invalid.sh@76 -- # [[ request: 00:14:05.134 { 00:14:05.134 "nqn": "nqn.2016-06.io.spdk:cnode16286", 00:14:05.134 "min_cntlid": 65520, 00:14:05.134 "method": "nvmf_create_subsystem", 00:14:05.134 
"req_id": 1 00:14:05.134 } 00:14:05.134 Got JSON-RPC error response 00:14:05.134 response: 00:14:05.134 { 00:14:05.134 "code": -32602, 00:14:05.134 "message": "Invalid cntlid range [65520-65519]" 00:14:05.134 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:05.134 17:51:09 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28883 -I 0 00:14:05.394 [2024-07-22 17:51:09.500119] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28883: invalid cntlid range [1-0] 00:14:05.394 17:51:09 -- target/invalid.sh@77 -- # out='request: 00:14:05.394 { 00:14:05.394 "nqn": "nqn.2016-06.io.spdk:cnode28883", 00:14:05.394 "max_cntlid": 0, 00:14:05.394 "method": "nvmf_create_subsystem", 00:14:05.394 "req_id": 1 00:14:05.394 } 00:14:05.394 Got JSON-RPC error response 00:14:05.394 response: 00:14:05.394 { 00:14:05.394 "code": -32602, 00:14:05.394 "message": "Invalid cntlid range [1-0]" 00:14:05.394 }' 00:14:05.394 17:51:09 -- target/invalid.sh@78 -- # [[ request: 00:14:05.394 { 00:14:05.394 "nqn": "nqn.2016-06.io.spdk:cnode28883", 00:14:05.394 "max_cntlid": 0, 00:14:05.394 "method": "nvmf_create_subsystem", 00:14:05.394 "req_id": 1 00:14:05.394 } 00:14:05.394 Got JSON-RPC error response 00:14:05.394 response: 00:14:05.394 { 00:14:05.394 "code": -32602, 00:14:05.394 "message": "Invalid cntlid range [1-0]" 00:14:05.394 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:05.394 17:51:09 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28465 -I 65520 00:14:05.654 [2024-07-22 17:51:09.708829] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28465: invalid cntlid range [1-65520] 00:14:05.654 17:51:09 -- target/invalid.sh@79 -- # out='request: 00:14:05.654 { 00:14:05.654 "nqn": "nqn.2016-06.io.spdk:cnode28465", 00:14:05.654 "max_cntlid": 65520, 00:14:05.654 "method": "nvmf_create_subsystem", 00:14:05.654 "req_id": 1 00:14:05.654 } 00:14:05.654 Got JSON-RPC error response 00:14:05.654 response: 00:14:05.654 { 00:14:05.654 "code": -32602, 00:14:05.654 "message": "Invalid cntlid range [1-65520]" 00:14:05.654 }' 00:14:05.654 17:51:09 -- target/invalid.sh@80 -- # [[ request: 00:14:05.654 { 00:14:05.654 "nqn": "nqn.2016-06.io.spdk:cnode28465", 00:14:05.654 "max_cntlid": 65520, 00:14:05.654 "method": "nvmf_create_subsystem", 00:14:05.654 "req_id": 1 00:14:05.654 } 00:14:05.654 Got JSON-RPC error response 00:14:05.654 response: 00:14:05.654 { 00:14:05.655 "code": -32602, 00:14:05.655 "message": "Invalid cntlid range [1-65520]" 00:14:05.655 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:05.655 17:51:09 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23406 -i 6 -I 5 00:14:05.655 [2024-07-22 17:51:09.913541] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23406: invalid cntlid range [6-5] 00:14:05.915 17:51:09 -- target/invalid.sh@83 -- # out='request: 00:14:05.915 { 00:14:05.915 "nqn": "nqn.2016-06.io.spdk:cnode23406", 00:14:05.915 "min_cntlid": 6, 00:14:05.915 "max_cntlid": 5, 00:14:05.915 "method": "nvmf_create_subsystem", 00:14:05.915 "req_id": 1 00:14:05.915 } 00:14:05.915 Got JSON-RPC error response 00:14:05.915 response: 00:14:05.915 { 00:14:05.915 "code": -32602, 00:14:05.915 "message": "Invalid cntlid range [6-5]" 
00:14:05.915 }' 00:14:05.915 17:51:09 -- target/invalid.sh@84 -- # [[ request: 00:14:05.915 { 00:14:05.915 "nqn": "nqn.2016-06.io.spdk:cnode23406", 00:14:05.915 "min_cntlid": 6, 00:14:05.915 "max_cntlid": 5, 00:14:05.915 "method": "nvmf_create_subsystem", 00:14:05.915 "req_id": 1 00:14:05.915 } 00:14:05.915 Got JSON-RPC error response 00:14:05.915 response: 00:14:05.915 { 00:14:05.915 "code": -32602, 00:14:05.915 "message": "Invalid cntlid range [6-5]" 00:14:05.915 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:05.915 17:51:09 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:05.915 17:51:10 -- target/invalid.sh@87 -- # out='request: 00:14:05.915 { 00:14:05.915 "name": "foobar", 00:14:05.915 "method": "nvmf_delete_target", 00:14:05.915 "req_id": 1 00:14:05.915 } 00:14:05.915 Got JSON-RPC error response 00:14:05.915 response: 00:14:05.915 { 00:14:05.915 "code": -32602, 00:14:05.915 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:05.915 }' 00:14:05.915 17:51:10 -- target/invalid.sh@88 -- # [[ request: 00:14:05.915 { 00:14:05.915 "name": "foobar", 00:14:05.915 "method": "nvmf_delete_target", 00:14:05.915 "req_id": 1 00:14:05.915 } 00:14:05.915 Got JSON-RPC error response 00:14:05.915 response: 00:14:05.915 { 00:14:05.915 "code": -32602, 00:14:05.915 "message": "The specified target doesn't exist, cannot delete it." 00:14:05.915 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:05.915 17:51:10 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:05.915 17:51:10 -- target/invalid.sh@91 -- # nvmftestfini 00:14:05.915 17:51:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:05.915 17:51:10 -- nvmf/common.sh@116 -- # sync 00:14:05.915 17:51:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:05.915 17:51:10 -- nvmf/common.sh@119 -- # set +e 00:14:05.915 17:51:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:05.915 17:51:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:05.915 rmmod nvme_tcp 00:14:05.915 rmmod nvme_fabrics 00:14:05.915 rmmod nvme_keyring 00:14:05.915 17:51:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:05.915 17:51:10 -- nvmf/common.sh@123 -- # set -e 00:14:05.915 17:51:10 -- nvmf/common.sh@124 -- # return 0 00:14:05.915 17:51:10 -- nvmf/common.sh@477 -- # '[' -n 1592839 ']' 00:14:05.915 17:51:10 -- nvmf/common.sh@478 -- # killprocess 1592839 00:14:05.915 17:51:10 -- common/autotest_common.sh@926 -- # '[' -z 1592839 ']' 00:14:05.915 17:51:10 -- common/autotest_common.sh@930 -- # kill -0 1592839 00:14:05.915 17:51:10 -- common/autotest_common.sh@931 -- # uname 00:14:05.915 17:51:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:05.915 17:51:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1592839 00:14:05.915 17:51:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:05.915 17:51:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:05.915 17:51:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1592839' 00:14:05.915 killing process with pid 1592839 00:14:05.915 17:51:10 -- common/autotest_common.sh@945 -- # kill 1592839 00:14:05.915 17:51:10 -- common/autotest_common.sh@950 -- # wait 1592839 00:14:06.176 17:51:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:06.176 17:51:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:06.176 
17:51:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:06.176 17:51:10 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:06.176 17:51:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:06.176 17:51:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.176 17:51:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:06.176 17:51:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.721 17:51:12 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:08.721 00:14:08.721 real 0m14.782s 00:14:08.721 user 0m21.742s 00:14:08.721 sys 0m6.950s 00:14:08.721 17:51:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:08.721 17:51:12 -- common/autotest_common.sh@10 -- # set +x 00:14:08.721 ************************************ 00:14:08.721 END TEST nvmf_invalid 00:14:08.721 ************************************ 00:14:08.721 17:51:12 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:08.721 17:51:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:08.721 17:51:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:08.721 17:51:12 -- common/autotest_common.sh@10 -- # set +x 00:14:08.721 ************************************ 00:14:08.721 START TEST nvmf_abort 00:14:08.721 ************************************ 00:14:08.721 17:51:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:08.721 * Looking for test storage... 00:14:08.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:08.721 17:51:12 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:08.721 17:51:12 -- nvmf/common.sh@7 -- # uname -s 00:14:08.721 17:51:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:08.721 17:51:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:08.721 17:51:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:08.721 17:51:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:08.721 17:51:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:08.721 17:51:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:08.721 17:51:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:08.721 17:51:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:08.721 17:51:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:08.721 17:51:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:08.721 17:51:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:14:08.721 17:51:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:14:08.721 17:51:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:08.721 17:51:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:08.721 17:51:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:08.721 17:51:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:08.721 17:51:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.721 17:51:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.721 17:51:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.721 17:51:12 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.721 17:51:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.721 17:51:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.721 17:51:12 -- paths/export.sh@5 -- # export PATH 00:14:08.721 17:51:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.721 17:51:12 -- nvmf/common.sh@46 -- # : 0 00:14:08.721 17:51:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:08.721 17:51:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:08.721 17:51:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:08.721 17:51:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:08.721 17:51:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:08.721 17:51:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:08.721 17:51:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:08.721 17:51:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:08.721 17:51:12 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:08.721 17:51:12 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:14:08.721 17:51:12 -- target/abort.sh@14 -- # nvmftestinit 00:14:08.721 17:51:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:08.721 17:51:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:08.722 17:51:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:08.722 17:51:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:08.722 17:51:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:08.722 17:51:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:14:08.722 17:51:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:08.722 17:51:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.722 17:51:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:08.722 17:51:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:08.722 17:51:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:08.722 17:51:12 -- common/autotest_common.sh@10 -- # set +x 00:14:16.861 17:51:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:16.861 17:51:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:16.861 17:51:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:16.861 17:51:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:16.861 17:51:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:16.861 17:51:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:16.861 17:51:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:16.861 17:51:20 -- nvmf/common.sh@294 -- # net_devs=() 00:14:16.861 17:51:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:16.861 17:51:20 -- nvmf/common.sh@295 -- # e810=() 00:14:16.861 17:51:20 -- nvmf/common.sh@295 -- # local -ga e810 00:14:16.861 17:51:20 -- nvmf/common.sh@296 -- # x722=() 00:14:16.861 17:51:20 -- nvmf/common.sh@296 -- # local -ga x722 00:14:16.861 17:51:20 -- nvmf/common.sh@297 -- # mlx=() 00:14:16.861 17:51:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:16.861 17:51:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:16.861 17:51:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:16.862 17:51:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:16.862 17:51:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:16.862 17:51:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:16.862 17:51:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:16.862 17:51:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:16.862 17:51:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:16.862 17:51:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:16.862 17:51:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:16.862 17:51:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:16.862 17:51:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:16.862 17:51:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:16.862 17:51:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:16.862 17:51:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:16.862 17:51:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:16.862 17:51:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:16.862 17:51:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:16.862 17:51:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:16.862 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:16.862 17:51:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:16.862 17:51:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:16.862 17:51:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.862 17:51:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.862 17:51:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:16.862 17:51:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:16.862 17:51:20 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:16.862 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:16.862 17:51:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:16.862 17:51:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:16.862 17:51:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.862 17:51:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.862 17:51:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:16.862 17:51:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:16.862 17:51:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:16.862 17:51:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:16.862 17:51:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:16.862 17:51:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.862 17:51:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:16.862 17:51:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.862 17:51:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:16.862 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:16.862 17:51:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.862 17:51:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:16.862 17:51:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.862 17:51:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:16.862 17:51:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.862 17:51:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:16.862 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:16.862 17:51:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.862 17:51:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:16.862 17:51:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:16.862 17:51:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:16.862 17:51:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:16.862 17:51:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:16.862 17:51:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:16.862 17:51:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:16.862 17:51:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:16.862 17:51:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:16.862 17:51:20 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:16.862 17:51:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:16.862 17:51:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:16.862 17:51:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:16.862 17:51:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:16.862 17:51:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:16.862 17:51:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:16.862 17:51:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:16.862 17:51:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:16.862 17:51:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:16.862 17:51:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:16.862 17:51:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:16.862 17:51:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
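At this point nvmf_tcp_init has wired the two ice ports into a point-to-point loopback: the target port cvl_0_0 now lives in the cvl_0_0_ns_spdk namespace as 10.0.0.2, while the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1. Condensed from the commands traced above (a recap, not extra setup):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up

The iptables rule and the two pings that follow just confirm that TCP port 4420 is allowed through and that both addresses answer before any NVMe-oF traffic is started.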
00:14:16.862 17:51:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:16.862 17:51:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:16.862 17:51:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:16.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:16.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.415 ms 00:14:16.862 00:14:16.862 --- 10.0.0.2 ping statistics --- 00:14:16.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.862 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:14:16.862 17:51:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:16.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:16.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:14:16.862 00:14:16.862 --- 10.0.0.1 ping statistics --- 00:14:16.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.862 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:14:16.862 17:51:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:16.862 17:51:20 -- nvmf/common.sh@410 -- # return 0 00:14:16.862 17:51:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:16.862 17:51:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:16.862 17:51:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:16.862 17:51:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:16.862 17:51:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:16.862 17:51:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:16.862 17:51:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:16.862 17:51:20 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:14:16.862 17:51:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:16.862 17:51:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:16.862 17:51:20 -- common/autotest_common.sh@10 -- # set +x 00:14:16.862 17:51:20 -- nvmf/common.sh@469 -- # nvmfpid=1598619 00:14:16.862 17:51:20 -- nvmf/common.sh@470 -- # waitforlisten 1598619 00:14:16.862 17:51:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:16.862 17:51:20 -- common/autotest_common.sh@819 -- # '[' -z 1598619 ']' 00:14:16.862 17:51:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.862 17:51:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:16.862 17:51:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.862 17:51:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:16.862 17:51:20 -- common/autotest_common.sh@10 -- # set +x 00:14:16.862 [2024-07-22 17:51:20.807507] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
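nvmfappstart then launches the target inside that namespace with core mask 0xE (the three reactor threads reported further down land on cores 1-3) and blocks until the RPC socket answers. The launch pattern, condensed from the trace with full paths abbreviated:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # test-suite helper: poll /var/tmp/spdk.sock until the app is up
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256

Everything after this point is plain JSON-RPC against that socket.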
00:14:16.862 [2024-07-22 17:51:20.807571] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.862 EAL: No free 2048 kB hugepages reported on node 1 00:14:16.862 [2024-07-22 17:51:20.883267] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:16.862 [2024-07-22 17:51:20.953402] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:16.862 [2024-07-22 17:51:20.953527] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.862 [2024-07-22 17:51:20.953535] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:16.862 [2024-07-22 17:51:20.953542] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:16.862 [2024-07-22 17:51:20.953685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:16.862 [2024-07-22 17:51:20.953790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:16.862 [2024-07-22 17:51:20.953792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.433 17:51:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:17.433 17:51:21 -- common/autotest_common.sh@852 -- # return 0 00:14:17.433 17:51:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:17.433 17:51:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:17.433 17:51:21 -- common/autotest_common.sh@10 -- # set +x 00:14:17.433 17:51:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.433 17:51:21 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:14:17.433 17:51:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:17.433 17:51:21 -- common/autotest_common.sh@10 -- # set +x 00:14:17.433 [2024-07-22 17:51:21.708687] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.694 17:51:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:17.694 17:51:21 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:14:17.694 17:51:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:17.694 17:51:21 -- common/autotest_common.sh@10 -- # set +x 00:14:17.694 Malloc0 00:14:17.694 17:51:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:17.694 17:51:21 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:17.694 17:51:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:17.694 17:51:21 -- common/autotest_common.sh@10 -- # set +x 00:14:17.694 Delay0 00:14:17.694 17:51:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:17.694 17:51:21 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:17.694 17:51:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:17.695 17:51:21 -- common/autotest_common.sh@10 -- # set +x 00:14:17.695 17:51:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:17.695 17:51:21 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:14:17.695 17:51:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:17.695 17:51:21 -- common/autotest_common.sh@10 -- # set +x 00:14:17.695 17:51:21 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:14:17.695 17:51:21 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:17.695 17:51:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:17.695 17:51:21 -- common/autotest_common.sh@10 -- # set +x 00:14:17.695 [2024-07-22 17:51:21.785336] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:17.695 17:51:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:17.695 17:51:21 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:17.695 17:51:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:17.695 17:51:21 -- common/autotest_common.sh@10 -- # set +x 00:14:17.695 17:51:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:17.695 17:51:21 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:14:17.695 EAL: No free 2048 kB hugepages reported on node 1 00:14:17.695 [2024-07-22 17:51:21.914656] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:20.239 Initializing NVMe Controllers 00:14:20.239 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:20.239 controller IO queue size 128 less than required 00:14:20.239 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:14:20.239 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:14:20.239 Initialization complete. Launching workers. 00:14:20.239 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36436 00:14:20.239 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36497, failed to submit 62 00:14:20.239 success 36436, unsuccess 61, failed 0 00:14:20.239 17:51:23 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:20.239 17:51:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.239 17:51:23 -- common/autotest_common.sh@10 -- # set +x 00:14:20.239 17:51:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.239 17:51:24 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:14:20.239 17:51:24 -- target/abort.sh@38 -- # nvmftestfini 00:14:20.239 17:51:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:20.239 17:51:24 -- nvmf/common.sh@116 -- # sync 00:14:20.239 17:51:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:20.239 17:51:24 -- nvmf/common.sh@119 -- # set +e 00:14:20.239 17:51:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:20.239 17:51:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:20.239 rmmod nvme_tcp 00:14:20.239 rmmod nvme_fabrics 00:14:20.239 rmmod nvme_keyring 00:14:20.239 17:51:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:20.239 17:51:24 -- nvmf/common.sh@123 -- # set -e 00:14:20.239 17:51:24 -- nvmf/common.sh@124 -- # return 0 00:14:20.239 17:51:24 -- nvmf/common.sh@477 -- # '[' -n 1598619 ']' 00:14:20.239 17:51:24 -- nvmf/common.sh@478 -- # killprocess 1598619 00:14:20.239 17:51:24 -- common/autotest_common.sh@926 -- # '[' -z 1598619 ']' 00:14:20.239 17:51:24 -- common/autotest_common.sh@930 -- # kill -0 1598619 00:14:20.239 17:51:24 -- common/autotest_common.sh@931 -- # uname 00:14:20.239 17:51:24 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:20.239 17:51:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1598619 00:14:20.239 17:51:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:20.239 17:51:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:20.239 17:51:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1598619' 00:14:20.239 killing process with pid 1598619 00:14:20.239 17:51:24 -- common/autotest_common.sh@945 -- # kill 1598619 00:14:20.239 17:51:24 -- common/autotest_common.sh@950 -- # wait 1598619 00:14:20.239 17:51:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:20.239 17:51:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:20.239 17:51:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:20.239 17:51:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:20.239 17:51:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:20.239 17:51:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.239 17:51:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:20.239 17:51:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.152 17:51:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:22.152 00:14:22.152 real 0m13.906s 00:14:22.152 user 0m14.072s 00:14:22.152 sys 0m6.916s 00:14:22.152 17:51:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:22.152 17:51:26 -- common/autotest_common.sh@10 -- # set +x 00:14:22.152 ************************************ 00:14:22.152 END TEST nvmf_abort 00:14:22.152 ************************************ 00:14:22.152 17:51:26 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:22.152 17:51:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:22.152 17:51:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:22.152 17:51:26 -- common/autotest_common.sh@10 -- # set +x 00:14:22.152 ************************************ 00:14:22.152 START TEST nvmf_ns_hotplug_stress 00:14:22.152 ************************************ 00:14:22.152 17:51:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:22.413 * Looking for test storage... 
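For reference, the nvmf_abort run that just finished built its target entirely over JSON-RPC before launching the abort example; condensed from the rpc_cmd calls traced above (paths abbreviated):

    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The delay bdev's latency arguments are in microseconds, so the 1000000 values add roughly a second to every I/O, which is what gives the abort example a realistic window to cancel commands still in flight (hence the large abort-submitted and success counts in its summary).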
00:14:22.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:22.414 17:51:26 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:22.414 17:51:26 -- nvmf/common.sh@7 -- # uname -s 00:14:22.414 17:51:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:22.414 17:51:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:22.414 17:51:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:22.414 17:51:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:22.414 17:51:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:22.414 17:51:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:22.414 17:51:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:22.414 17:51:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:22.414 17:51:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:22.414 17:51:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:22.414 17:51:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:14:22.414 17:51:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:14:22.414 17:51:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:22.414 17:51:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:22.414 17:51:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:22.414 17:51:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:22.414 17:51:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.414 17:51:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.414 17:51:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.414 17:51:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.414 17:51:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.414 17:51:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.414 17:51:26 -- paths/export.sh@5 -- # export PATH 00:14:22.414 17:51:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.414 17:51:26 -- nvmf/common.sh@46 -- # : 0 00:14:22.414 17:51:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:22.414 17:51:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:22.414 17:51:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:22.414 17:51:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:22.414 17:51:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.414 17:51:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:22.414 17:51:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:22.414 17:51:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:22.414 17:51:26 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:22.414 17:51:26 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:14:22.414 17:51:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:22.414 17:51:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:22.414 17:51:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:22.414 17:51:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:22.414 17:51:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:22.414 17:51:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.414 17:51:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.414 17:51:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.414 17:51:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:22.414 17:51:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:22.414 17:51:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:22.414 17:51:26 -- common/autotest_common.sh@10 -- # set +x 00:14:30.558 17:51:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:30.558 17:51:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:30.558 17:51:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:30.558 17:51:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:30.558 17:51:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:30.558 17:51:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:30.558 17:51:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:30.558 17:51:34 -- nvmf/common.sh@294 -- # net_devs=() 00:14:30.558 17:51:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:30.558 17:51:34 -- nvmf/common.sh@295 
-- # e810=() 00:14:30.558 17:51:34 -- nvmf/common.sh@295 -- # local -ga e810 00:14:30.558 17:51:34 -- nvmf/common.sh@296 -- # x722=() 00:14:30.559 17:51:34 -- nvmf/common.sh@296 -- # local -ga x722 00:14:30.559 17:51:34 -- nvmf/common.sh@297 -- # mlx=() 00:14:30.559 17:51:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:30.559 17:51:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:30.559 17:51:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:30.559 17:51:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:30.559 17:51:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:30.559 17:51:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:30.559 17:51:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:30.559 17:51:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:30.559 17:51:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:30.559 17:51:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:30.559 17:51:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:30.559 17:51:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:30.559 17:51:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:30.559 17:51:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:30.559 17:51:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:30.559 17:51:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:30.559 17:51:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:30.559 17:51:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:30.559 17:51:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:30.559 17:51:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:30.559 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:30.559 17:51:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:30.559 17:51:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:30.559 17:51:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.559 17:51:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.559 17:51:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:30.559 17:51:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:30.559 17:51:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:30.559 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:30.559 17:51:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:30.559 17:51:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:30.559 17:51:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.559 17:51:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.559 17:51:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:30.559 17:51:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:30.559 17:51:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:30.559 17:51:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:30.559 17:51:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:30.559 17:51:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.559 17:51:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:30.559 17:51:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.559 17:51:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:30.559 Found 
net devices under 0000:4b:00.0: cvl_0_0 00:14:30.559 17:51:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.559 17:51:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:30.559 17:51:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.559 17:51:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:30.559 17:51:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.559 17:51:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:30.559 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:30.559 17:51:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.559 17:51:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:30.559 17:51:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:30.559 17:51:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:30.559 17:51:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:30.559 17:51:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:30.559 17:51:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:30.559 17:51:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:30.559 17:51:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:30.559 17:51:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:30.559 17:51:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:30.559 17:51:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:30.559 17:51:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:30.559 17:51:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:30.559 17:51:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:30.559 17:51:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:30.559 17:51:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:30.559 17:51:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:30.559 17:51:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:30.559 17:51:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:30.559 17:51:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:30.559 17:51:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:30.559 17:51:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:30.559 17:51:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:30.559 17:51:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:30.559 17:51:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:30.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:30.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:14:30.559 00:14:30.559 --- 10.0.0.2 ping statistics --- 00:14:30.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.559 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:14:30.559 17:51:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:30.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:30.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:14:30.559 00:14:30.559 --- 10.0.0.1 ping statistics --- 00:14:30.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.559 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:14:30.559 17:51:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:30.559 17:51:34 -- nvmf/common.sh@410 -- # return 0 00:14:30.559 17:51:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:30.559 17:51:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:30.559 17:51:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:30.559 17:51:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:30.559 17:51:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:30.559 17:51:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:30.559 17:51:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:30.559 17:51:34 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:14:30.559 17:51:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:30.559 17:51:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:30.559 17:51:34 -- common/autotest_common.sh@10 -- # set +x 00:14:30.559 17:51:34 -- nvmf/common.sh@469 -- # nvmfpid=1603724 00:14:30.559 17:51:34 -- nvmf/common.sh@470 -- # waitforlisten 1603724 00:14:30.559 17:51:34 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:30.559 17:51:34 -- common/autotest_common.sh@819 -- # '[' -z 1603724 ']' 00:14:30.559 17:51:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.559 17:51:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:30.559 17:51:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.559 17:51:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:30.559 17:51:34 -- common/autotest_common.sh@10 -- # set +x 00:14:30.820 [2024-07-22 17:51:34.846243] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:30.820 [2024-07-22 17:51:34.846303] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.820 EAL: No free 2048 kB hugepages reported on node 1 00:14:30.820 [2024-07-22 17:51:34.922333] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:30.820 [2024-07-22 17:51:34.990858] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:30.820 [2024-07-22 17:51:34.990987] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.820 [2024-07-22 17:51:34.990995] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.820 [2024-07-22 17:51:34.991002] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
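The nvmf_tcp_init steps traced above boil down to a small namespace-based loopback topology: one NIC port is moved into a private network namespace and addressed as the target, the other stays in the host namespace as the initiator, and reachability is verified with ping before the target is started. A minimal shell sketch of that setup, reconstructed from the commands visible in this run (the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addresses and port 4420 are the values from this particular log, and the sudo prefixes are an assumption rather than part of nvmf/common.sh):

NS=cvl_0_0_ns_spdk
sudo ip -4 addr flush cvl_0_0                                     # start from clean interfaces
sudo ip -4 addr flush cvl_0_1
sudo ip netns add "$NS"                                           # target side gets its own netns
sudo ip link set cvl_0_0 netns "$NS"                              # move the target-facing port into it
sudo ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator address in the host namespace
sudo ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0      # target address inside the namespace
sudo ip link set cvl_0_1 up
sudo ip netns exec "$NS" ip link set cvl_0_0 up
sudo ip netns exec "$NS" ip link set lo up
sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # let NVMe/TCP traffic through
ping -c 1 10.0.0.2                                                # host -> target reachability check
sudo ip netns exec "$NS" ping -c 1 10.0.0.1                       # target -> host reachability check

With this topology in place, the target application is launched under "ip netns exec cvl_0_0_ns_spdk" (as the NVMF_TARGET_NS_CMD lines above show), so it listens on 10.0.0.2:4420 while the initiator-side tools connect from the host namespace.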
00:14:30.820 [2024-07-22 17:51:34.991128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:30.820 [2024-07-22 17:51:34.991244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:30.820 [2024-07-22 17:51:34.991246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.761 17:51:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:31.761 17:51:35 -- common/autotest_common.sh@852 -- # return 0 00:14:31.761 17:51:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:31.761 17:51:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:31.761 17:51:35 -- common/autotest_common.sh@10 -- # set +x 00:14:31.761 17:51:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.761 17:51:35 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:14:31.761 17:51:35 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:31.761 [2024-07-22 17:51:35.910234] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:31.761 17:51:35 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:32.022 17:51:36 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:32.282 [2024-07-22 17:51:36.315654] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.282 17:51:36 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:32.282 17:51:36 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:14:32.543 Malloc0 00:14:32.543 17:51:36 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:32.803 Delay0 00:14:32.803 17:51:36 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:33.063 17:51:37 -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:14:33.324 NULL1 00:14:33.324 17:51:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:33.324 17:51:37 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1604155 00:14:33.324 17:51:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:33.324 17:51:37 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:14:33.324 17:51:37 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:33.584 EAL: No free 2048 kB hugepages reported on node 1 00:14:33.584 17:51:37 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:33.845 17:51:38 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:14:33.845 17:51:38 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:14:34.105 true 00:14:34.105 17:51:38 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:34.105 17:51:38 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:34.365 17:51:38 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:34.365 17:51:38 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:14:34.365 17:51:38 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:14:34.625 true 00:14:34.625 17:51:38 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:34.625 17:51:38 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:34.886 17:51:39 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:35.147 17:51:39 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:14:35.147 17:51:39 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:14:35.147 true 00:14:35.408 17:51:39 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:35.408 17:51:39 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:35.408 17:51:39 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:35.669 17:51:39 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:14:35.669 17:51:39 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:14:35.930 true 00:14:35.930 17:51:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:35.930 17:51:40 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:36.269 17:51:40 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:36.269 17:51:40 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:14:36.269 17:51:40 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:14:36.535 true 00:14:36.535 17:51:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:36.535 17:51:40 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:36.796 17:51:40 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:14:36.796 17:51:41 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:14:36.796 17:51:41 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:14:37.057 true 00:14:37.057 17:51:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:37.057 17:51:41 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:37.317 17:51:41 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:37.577 17:51:41 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:14:37.577 17:51:41 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:14:37.837 true 00:14:37.837 17:51:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:37.837 17:51:41 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:37.837 17:51:42 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:38.097 17:51:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:14:38.097 17:51:42 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:14:38.357 true 00:14:38.357 17:51:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:38.357 17:51:42 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.617 17:51:42 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:38.879 17:51:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:14:38.879 17:51:42 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:14:38.879 true 00:14:38.879 17:51:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:38.879 17:51:43 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:39.178 17:51:43 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:39.438 17:51:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:14:39.438 17:51:43 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:14:39.698 true 00:14:39.698 17:51:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:39.698 17:51:43 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:39.698 17:51:43 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:39.959 17:51:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:14:39.959 17:51:44 -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:14:40.220 true 00:14:40.220 17:51:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:40.220 17:51:44 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:40.480 17:51:44 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:40.741 17:51:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:14:40.741 17:51:44 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:14:40.741 true 00:14:40.741 17:51:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:40.741 17:51:44 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:41.002 17:51:45 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:41.262 17:51:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:14:41.262 17:51:45 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:14:41.522 true 00:14:41.522 17:51:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:41.522 17:51:45 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:41.782 17:51:45 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:41.782 17:51:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:14:41.782 17:51:46 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:14:42.042 true 00:14:42.042 17:51:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:42.042 17:51:46 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.302 17:51:46 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:42.562 17:51:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:14:42.562 17:51:46 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:14:42.562 true 00:14:42.823 17:51:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:42.823 17:51:46 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.823 17:51:47 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:43.083 17:51:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:14:43.083 17:51:47 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1016 00:14:43.342 true 00:14:43.342 17:51:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:43.342 17:51:47 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:43.603 17:51:47 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:43.863 17:51:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:14:43.863 17:51:47 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:14:43.863 true 00:14:43.863 17:51:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:43.863 17:51:48 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:44.123 17:51:48 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:44.384 17:51:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:14:44.384 17:51:48 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:14:44.644 true 00:14:44.644 17:51:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:44.644 17:51:48 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:44.644 17:51:48 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:44.905 17:51:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:14:44.905 17:51:49 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:14:45.166 true 00:14:45.166 17:51:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:45.166 17:51:49 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:45.426 17:51:49 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:45.686 17:51:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:14:45.686 17:51:49 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:14:45.686 true 00:14:45.686 17:51:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:45.686 17:51:49 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:45.947 17:51:50 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:46.207 17:51:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:14:46.207 17:51:50 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:14:46.468 true 00:14:46.468 17:51:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:46.468 
17:51:50 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:46.468 17:51:50 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:46.728 17:51:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:14:46.728 17:51:50 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:14:46.989 true 00:14:46.989 17:51:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:46.989 17:51:51 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:47.249 17:51:51 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:47.510 17:51:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:14:47.510 17:51:51 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:47.510 true 00:14:47.510 17:51:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:47.510 17:51:51 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:47.770 17:51:51 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:48.032 17:51:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:14:48.032 17:51:52 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:48.292 true 00:14:48.292 17:51:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:48.292 17:51:52 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.553 17:51:52 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:48.553 17:51:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:14:48.553 17:51:52 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:48.813 true 00:14:48.813 17:51:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:48.813 17:51:53 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:49.073 17:51:53 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:49.333 17:51:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:14:49.333 17:51:53 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:49.333 true 00:14:49.594 17:51:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:49.594 17:51:53 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:49.594 17:51:53 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:49.854 17:51:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:49.854 17:51:54 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:50.115 true 00:14:50.115 17:51:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:50.115 17:51:54 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.375 17:51:54 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:50.375 17:51:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:50.375 17:51:54 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:50.635 true 00:14:50.635 17:51:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:50.635 17:51:54 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.895 17:51:55 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:51.156 17:51:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:14:51.156 17:51:55 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:14:51.156 true 00:14:51.417 17:51:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:51.417 17:51:55 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.417 17:51:55 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:51.677 17:51:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:14:51.677 17:51:55 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:14:51.937 true 00:14:51.937 17:51:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:51.937 17:51:56 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:52.198 17:51:56 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:52.198 17:51:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:14:52.198 17:51:56 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:14:52.458 true 00:14:52.458 17:51:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:52.458 17:51:56 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:52.719 17:51:56 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:52.980 17:51:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:14:52.980 17:51:57 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:14:52.980 true 00:14:53.240 17:51:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:53.240 17:51:57 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:53.240 17:51:57 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:53.501 17:51:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:14:53.501 17:51:57 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:14:53.761 true 00:14:53.761 17:51:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:53.761 17:51:57 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:54.022 17:51:58 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:54.283 17:51:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:14:54.283 17:51:58 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:14:54.283 true 00:14:54.283 17:51:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:54.283 17:51:58 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:54.543 17:51:58 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:54.804 17:51:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:14:54.804 17:51:58 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:14:55.064 true 00:14:55.064 17:51:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:55.064 17:51:59 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:55.064 17:51:59 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:55.324 17:51:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:14:55.324 17:51:59 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:14:55.585 true 00:14:55.585 17:51:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:55.585 17:51:59 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:55.848 17:51:59 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:14:56.191 17:52:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:14:56.191 17:52:00 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:14:56.191 true 00:14:56.191 17:52:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:56.191 17:52:00 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:56.451 17:52:00 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:56.451 17:52:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:14:56.451 17:52:00 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:14:56.712 true 00:14:56.712 17:52:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:56.712 17:52:00 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:56.973 17:52:01 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:57.233 17:52:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:14:57.233 17:52:01 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:14:57.233 true 00:14:57.493 17:52:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:57.493 17:52:01 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:57.493 17:52:01 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:57.753 17:52:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:14:57.753 17:52:01 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:14:58.014 true 00:14:58.014 17:52:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:58.014 17:52:02 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:58.274 17:52:02 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:58.274 17:52:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:14:58.274 17:52:02 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:14:58.535 true 00:14:58.535 17:52:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:58.535 17:52:02 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:58.797 17:52:02 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:59.061 17:52:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:14:59.061 17:52:03 -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:14:59.061 true 00:14:59.061 17:52:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:59.061 17:52:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.322 17:52:03 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:59.583 17:52:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:14:59.583 17:52:03 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:14:59.843 true 00:14:59.843 17:52:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:14:59.843 17:52:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.843 17:52:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:00.104 17:52:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:15:00.104 17:52:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:15:00.364 true 00:15:00.364 17:52:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:15:00.364 17:52:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:00.625 17:52:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:00.884 17:52:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:15:00.884 17:52:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:15:00.884 true 00:15:00.884 17:52:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:15:00.884 17:52:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:01.146 17:52:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:01.407 17:52:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:15:01.407 17:52:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:15:01.668 true 00:15:01.668 17:52:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:15:01.668 17:52:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:01.668 17:52:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:01.929 17:52:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:15:01.929 17:52:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1047 00:15:02.189 true 00:15:02.189 17:52:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:15:02.189 17:52:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:02.449 17:52:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:02.709 17:52:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:15:02.709 17:52:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:15:02.709 true 00:15:02.709 17:52:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:15:02.709 17:52:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:02.970 17:52:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:03.231 17:52:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:15:03.231 17:52:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:15:03.492 true 00:15:03.492 17:52:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:15:03.492 17:52:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:03.492 17:52:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:03.753 Initializing NVMe Controllers 00:15:03.753 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:03.753 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:15:03.753 Controller IO queue size 128, less than required. 00:15:03.753 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:03.753 WARNING: Some requested NVMe devices were skipped 00:15:03.753 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:03.753 Initialization complete. Launching workers. 
00:15:03.753 ======================================================== 00:15:03.753 Latency(us) 00:15:03.753 Device Information : IOPS MiB/s Average min max 00:15:03.753 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 25728.08 12.56 4975.14 1661.21 9464.20 00:15:03.753 ======================================================== 00:15:03.753 Total : 25728.08 12.56 4975.14 1661.21 9464.20 00:15:03.753 00:15:03.753 17:52:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:15:03.753 17:52:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:15:04.014 true 00:15:04.014 17:52:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1604155 00:15:04.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1604155) - No such process 00:15:04.014 17:52:08 -- target/ns_hotplug_stress.sh@53 -- # wait 1604155 00:15:04.014 17:52:08 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:04.274 17:52:08 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:04.274 17:52:08 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:15:04.274 17:52:08 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:15:04.274 17:52:08 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:15:04.274 17:52:08 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:04.274 17:52:08 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:15:04.534 null0 00:15:04.534 17:52:08 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:04.534 17:52:08 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:04.535 17:52:08 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:15:04.804 null1 00:15:04.804 17:52:08 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:04.804 17:52:08 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:04.804 17:52:08 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:15:04.804 null2 00:15:05.069 17:52:09 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:05.069 17:52:09 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:05.069 17:52:09 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:15:05.069 null3 00:15:05.069 17:52:09 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:05.069 17:52:09 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:05.069 17:52:09 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:15:05.329 null4 00:15:05.329 17:52:09 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:05.329 17:52:09 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:05.329 17:52:09 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:15:05.590 null5 00:15:05.590 17:52:09 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:05.590 17:52:09 -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:05.590 17:52:09 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:15:05.590 null6 00:15:05.590 17:52:09 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:05.590 17:52:09 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:05.590 17:52:09 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:15:05.850 null7 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
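For reference, the add_remove helper whose xtrace records appear above (target/ns_hotplug_stress.sh, script lines 14-18 as numbered in this trace) reconstructs roughly as the sketch below. This is inferred from the trace, not copied from the script, and $rpc_py is assumed to be the workspace rpc.py path shown in the log.

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Attach and detach one namespace on the target subsystem ten times in a row.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            # Expose the null bdev as namespace $nsid of cnode1 ...
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            # ... then immediately remove it again.
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

Each worker spawned in the surrounding loop runs this function in the background against its own null bdev, which is what produces the interleaved add_ns/remove_ns records that follow.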
00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:15:05.850 17:52:10 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:05.851 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:05.851 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:05.851 17:52:10 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:15:05.851 17:52:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:05.851 17:52:10 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
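The driver around those workers (script lines 58-66 as numbered in this trace: nthreads=8, the pids array, and the final wait on the eight worker PIDs) condenses to the following sketch, again reconstructed from the xtrace output rather than taken verbatim from the script:

    nthreads=8
    pids=()

    # One 100 MB null bdev with a 4096-byte block size per worker: null0 .. null7.
    for ((i = 0; i < nthreads; i++)); do
        "$rpc_py" bdev_null_create "null$i" 100 4096
    done

    # Start the eight add_remove workers in parallel, one namespace ID per bdev,
    # collect their PIDs, and wait for all of them to finish.
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"

The "wait 1609774 ... 1609786" record just below corresponds to this final wait.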
00:15:05.851 17:52:10 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:05.851 17:52:10 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:15:05.851 17:52:10 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:05.851 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:05.851 17:52:10 -- target/ns_hotplug_stress.sh@66 -- # wait 1609774 1609775 1609777 1609779 1609781 1609783 1609784 1609786 00:15:05.851 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:05.851 17:52:10 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:15:05.851 17:52:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:05.851 17:52:10 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:15:05.851 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:05.851 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:05.851 17:52:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:06.111 17:52:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:06.112 17:52:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:06.112 17:52:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:06.112 17:52:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:06.112 17:52:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:06.112 17:52:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:06.112 17:52:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:06.112 17:52:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:06.372 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:06.372 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:06.372 17:52:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:06.372 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:06.372 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:06.372 17:52:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:06.372 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:06.372 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:06.372 17:52:10 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:06.373 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:06.373 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:06.373 17:52:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:06.373 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:06.373 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:06.373 17:52:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:06.373 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:06.373 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:06.373 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:06.373 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:06.373 17:52:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:06.373 17:52:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:06.373 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:06.373 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:06.373 17:52:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:06.373 17:52:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:06.373 17:52:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:06.373 17:52:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:06.373 17:52:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:06.373 17:52:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:06.373 17:52:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:06.373 17:52:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:06.633 17:52:10 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:06.633 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:06.633 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:06.633 17:52:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
00:15:06.633 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:06.633 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:06.633 17:52:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:06.633 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:06.633 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:06.633 17:52:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:06.633 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:06.633 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:06.633 17:52:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:06.633 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:06.633 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:06.633 17:52:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:06.633 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:06.633 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:06.633 17:52:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:06.633 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:06.633 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:06.633 17:52:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:06.633 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:06.633 17:52:10 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:06.633 17:52:10 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:06.894 17:52:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:06.894 17:52:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:06.894 17:52:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:06.894 17:52:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:06.894 17:52:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:06.894 17:52:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:06.894 17:52:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:15:06.894 17:52:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:07.155 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:07.155 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:07.155 17:52:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:07.155 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:07.155 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:07.155 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:07.155 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:07.155 17:52:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:07.155 17:52:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:07.155 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:07.155 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:07.155 17:52:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:07.155 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:07.155 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:07.155 17:52:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:07.155 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:07.155 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:07.155 17:52:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:07.155 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:07.155 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:07.155 17:52:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:07.155 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:07.155 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:07.155 17:52:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:07.155 17:52:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:07.155 17:52:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:07.155 17:52:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:07.155 17:52:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:15:07.155 17:52:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:07.155 17:52:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:07.416 17:52:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:07.416 17:52:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:07.416 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:07.416 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:07.416 17:52:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:07.416 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:07.416 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:07.416 17:52:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:07.416 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:07.416 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:07.416 17:52:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:07.416 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:07.416 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:07.416 17:52:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:07.416 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:07.416 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:07.416 17:52:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:07.416 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:07.416 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:07.416 17:52:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:07.416 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:07.416 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:07.416 17:52:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:07.416 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:07.416 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:07.416 17:52:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:07.677 17:52:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:15:07.677 17:52:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:07.677 17:52:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:07.677 17:52:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:07.677 17:52:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:07.677 17:52:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:07.677 17:52:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:07.677 17:52:11 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:07.938 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:07.938 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:07.938 17:52:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:07.938 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:07.938 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:07.938 17:52:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:07.938 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:07.938 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:07.938 17:52:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:07.938 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:07.938 17:52:11 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:07.938 17:52:11 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:07.938 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:07.938 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:07.938 17:52:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:07.938 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:07.938 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:07.938 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:07.938 17:52:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:07.938 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:07.938 17:52:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:07.938 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:07.938 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:07.938 17:52:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:07.938 17:52:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:07.938 17:52:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:07.938 17:52:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:07.938 17:52:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:07.938 17:52:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:07.938 17:52:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:07.938 17:52:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:08.198 17:52:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:08.198 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:08.198 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:08.198 17:52:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:08.198 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:08.198 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:08.198 17:52:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:08.198 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:08.198 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:08.198 17:52:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:08.198 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:08.198 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:08.198 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:08.198 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:08.198 17:52:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:08.199 17:52:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:08.199 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:15:08.199 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:08.199 17:52:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:08.199 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:08.199 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:08.199 17:52:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:08.199 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:08.199 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:08.199 17:52:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:08.459 17:52:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:08.459 17:52:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:08.459 17:52:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:08.459 17:52:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:08.459 17:52:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:08.459 17:52:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:08.459 17:52:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:08.459 17:52:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:08.720 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:08.720 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:08.720 17:52:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:08.720 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:08.720 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:08.720 17:52:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:08.720 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:08.720 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:08.720 17:52:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:08.720 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:08.720 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:08.720 17:52:12 -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:08.720 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:08.720 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:08.720 17:52:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:08.720 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:08.720 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:08.720 17:52:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:08.720 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:08.720 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:08.720 17:52:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:08.720 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:08.720 17:52:12 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:08.720 17:52:12 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:08.720 17:52:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:08.720 17:52:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:08.720 17:52:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:08.720 17:52:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:08.720 17:52:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:08.720 17:52:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:08.720 17:52:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:08.720 17:52:12 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:08.980 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:08.980 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:08.980 17:52:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:08.980 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:08.980 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:08.980 17:52:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:15:08.980 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:08.980 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:08.980 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:08.980 17:52:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:08.980 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:08.980 17:52:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:08.980 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:08.980 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:08.980 17:52:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:08.980 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:08.980 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:08.980 17:52:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:08.980 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:08.980 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:08.980 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:08.980 17:52:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:08.980 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:08.980 17:52:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:08.980 17:52:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:09.241 17:52:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:09.241 17:52:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:09.241 17:52:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:09.241 17:52:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:09.241 17:52:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:09.241 17:52:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:09.241 17:52:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:09.241 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:09.241 17:52:13 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:09.241 17:52:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:09.241 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:09.241 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:09.241 17:52:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:09.501 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:09.501 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:09.502 17:52:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:09.502 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:09.502 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:09.502 17:52:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:09.502 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:09.502 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:09.502 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:09.502 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:09.502 17:52:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:09.502 17:52:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:09.502 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:09.502 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:09.502 17:52:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:09.502 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:09.502 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:09.502 17:52:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:09.502 17:52:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:09.502 17:52:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:09.502 17:52:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:09.502 17:52:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:09.502 17:52:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:09.502 17:52:13 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:09.502 17:52:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:09.502 17:52:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:09.763 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:09.763 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:09.763 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:09.763 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:09.763 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:09.763 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:09.763 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:09.763 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:09.763 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:09.763 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:09.763 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:09.763 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:09.763 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:09.763 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:09.763 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:09.763 17:52:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:09.763 17:52:13 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:09.763 17:52:13 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:15:09.763 17:52:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:09.763 17:52:13 -- nvmf/common.sh@116 -- # sync 00:15:09.763 17:52:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:09.763 17:52:13 -- nvmf/common.sh@119 -- # set +e 00:15:09.763 17:52:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:09.763 17:52:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:09.763 rmmod nvme_tcp 00:15:09.763 rmmod nvme_fabrics 00:15:09.763 rmmod nvme_keyring 00:15:10.023 17:52:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:10.023 17:52:14 -- nvmf/common.sh@123 -- # set -e 00:15:10.023 17:52:14 -- nvmf/common.sh@124 -- # return 0 00:15:10.023 17:52:14 -- nvmf/common.sh@477 -- # '[' -n 1603724 ']' 00:15:10.023 17:52:14 -- nvmf/common.sh@478 -- # killprocess 1603724 00:15:10.023 17:52:14 -- common/autotest_common.sh@926 -- # '[' -z 1603724 ']' 00:15:10.023 17:52:14 -- common/autotest_common.sh@930 -- # kill -0 1603724 00:15:10.023 17:52:14 -- common/autotest_common.sh@931 -- # uname 00:15:10.023 17:52:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:10.023 17:52:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1603724 00:15:10.023 17:52:14 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:10.023 17:52:14 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:10.023 17:52:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1603724' 00:15:10.023 killing process with pid 1603724 00:15:10.023 17:52:14 -- common/autotest_common.sh@945 -- # kill 1603724 00:15:10.023 17:52:14 -- common/autotest_common.sh@950 -- # wait 1603724 00:15:10.023 17:52:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:10.023 17:52:14 -- 
nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:10.023 17:52:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:10.023 17:52:14 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:10.023 17:52:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:10.024 17:52:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.024 17:52:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:10.024 17:52:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.569 17:52:16 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:12.569 00:15:12.569 real 0m49.914s 00:15:12.569 user 3m22.166s 00:15:12.569 sys 0m17.627s 00:15:12.569 17:52:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:12.569 17:52:16 -- common/autotest_common.sh@10 -- # set +x 00:15:12.569 ************************************ 00:15:12.569 END TEST nvmf_ns_hotplug_stress 00:15:12.569 ************************************ 00:15:12.569 17:52:16 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:12.569 17:52:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:12.569 17:52:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:12.569 17:52:16 -- common/autotest_common.sh@10 -- # set +x 00:15:12.569 ************************************ 00:15:12.569 START TEST nvmf_connect_stress 00:15:12.569 ************************************ 00:15:12.569 17:52:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:12.569 * Looking for test storage... 00:15:12.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:12.569 17:52:16 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:12.569 17:52:16 -- nvmf/common.sh@7 -- # uname -s 00:15:12.569 17:52:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:12.569 17:52:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.569 17:52:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.569 17:52:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.569 17:52:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.569 17:52:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.569 17:52:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.569 17:52:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.569 17:52:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.569 17:52:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.569 17:52:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:15:12.569 17:52:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:15:12.569 17:52:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.569 17:52:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.569 17:52:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:12.569 17:52:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:12.569 17:52:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.569 17:52:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.569 17:52:16 -- scripts/common.sh@442 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.569 17:52:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.569 17:52:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.569 17:52:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.569 17:52:16 -- paths/export.sh@5 -- # export PATH 00:15:12.569 17:52:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.569 17:52:16 -- nvmf/common.sh@46 -- # : 0 00:15:12.569 17:52:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:12.569 17:52:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:12.569 17:52:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:12.569 17:52:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:12.569 17:52:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.569 17:52:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:12.569 17:52:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:12.569 17:52:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:12.569 17:52:16 -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:12.569 17:52:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:12.569 17:52:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:12.569 17:52:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:12.569 17:52:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:12.569 17:52:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:12.569 17:52:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.569 17:52:16 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:12.569 17:52:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.569 17:52:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:12.569 17:52:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:12.569 17:52:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:12.569 17:52:16 -- common/autotest_common.sh@10 -- # set +x 00:15:20.711 17:52:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:20.711 17:52:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:20.711 17:52:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:20.711 17:52:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:20.711 17:52:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:20.711 17:52:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:20.711 17:52:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:20.711 17:52:24 -- nvmf/common.sh@294 -- # net_devs=() 00:15:20.711 17:52:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:20.711 17:52:24 -- nvmf/common.sh@295 -- # e810=() 00:15:20.711 17:52:24 -- nvmf/common.sh@295 -- # local -ga e810 00:15:20.711 17:52:24 -- nvmf/common.sh@296 -- # x722=() 00:15:20.711 17:52:24 -- nvmf/common.sh@296 -- # local -ga x722 00:15:20.711 17:52:24 -- nvmf/common.sh@297 -- # mlx=() 00:15:20.711 17:52:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:20.711 17:52:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:20.711 17:52:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:20.711 17:52:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:20.711 17:52:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:20.711 17:52:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:20.711 17:52:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:20.711 17:52:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:20.711 17:52:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:20.711 17:52:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:20.711 17:52:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:20.711 17:52:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:20.711 17:52:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:20.711 17:52:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:20.711 17:52:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:20.711 17:52:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:20.711 17:52:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:20.711 17:52:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:20.711 17:52:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:20.711 17:52:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:20.711 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:20.711 17:52:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:20.711 17:52:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:20.711 17:52:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:20.711 17:52:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:20.711 17:52:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:20.711 17:52:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:20.711 17:52:24 -- nvmf/common.sh@340 -- # echo 'Found 
0000:4b:00.1 (0x8086 - 0x159b)' 00:15:20.711 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:20.711 17:52:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:20.711 17:52:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:20.711 17:52:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:20.711 17:52:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:20.711 17:52:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:20.711 17:52:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:20.711 17:52:24 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:20.711 17:52:24 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:20.711 17:52:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:20.711 17:52:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:20.711 17:52:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:20.711 17:52:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:20.711 17:52:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:20.711 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:20.711 17:52:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:20.711 17:52:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:20.711 17:52:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:20.711 17:52:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:20.711 17:52:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:20.712 17:52:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:20.712 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:20.712 17:52:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:20.712 17:52:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:20.712 17:52:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:20.712 17:52:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:20.712 17:52:24 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:20.712 17:52:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:20.712 17:52:24 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:20.712 17:52:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:20.712 17:52:24 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:20.712 17:52:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:20.712 17:52:24 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:20.712 17:52:24 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:20.712 17:52:24 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:20.712 17:52:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:20.712 17:52:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:20.712 17:52:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:20.712 17:52:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:20.712 17:52:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:20.712 17:52:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:20.712 17:52:24 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:20.712 17:52:24 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:20.712 17:52:24 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:20.712 17:52:24 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:20.712 17:52:24 -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:20.712 17:52:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:20.712 17:52:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:20.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:20.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:15:20.712 00:15:20.712 --- 10.0.0.2 ping statistics --- 00:15:20.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.712 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:15:20.712 17:52:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:20.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:20.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:15:20.712 00:15:20.712 --- 10.0.0.1 ping statistics --- 00:15:20.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.712 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:15:20.712 17:52:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:20.712 17:52:24 -- nvmf/common.sh@410 -- # return 0 00:15:20.712 17:52:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:20.712 17:52:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:20.712 17:52:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:20.712 17:52:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:20.712 17:52:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:20.712 17:52:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:20.712 17:52:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:20.712 17:52:24 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:20.712 17:52:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:20.712 17:52:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:20.712 17:52:24 -- common/autotest_common.sh@10 -- # set +x 00:15:20.712 17:52:24 -- nvmf/common.sh@469 -- # nvmfpid=1615063 00:15:20.712 17:52:24 -- nvmf/common.sh@470 -- # waitforlisten 1615063 00:15:20.712 17:52:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:20.712 17:52:24 -- common/autotest_common.sh@819 -- # '[' -z 1615063 ']' 00:15:20.712 17:52:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.712 17:52:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:20.712 17:52:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.712 17:52:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:20.712 17:52:24 -- common/autotest_common.sh@10 -- # set +x 00:15:20.712 [2024-07-22 17:52:24.544874] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
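The nvmf_tcp_init block above wires the two ports of the E810 NIC back to back: cvl_0_0 becomes the target-side interface inside a fresh network namespace, while cvl_0_1 stays in the root namespace as the initiator side, and the sub-millisecond pings above confirm the two ports reach each other before nvmf_tgt is started inside that namespace with -m 0xE (hence the three reactors on cores 1-3 reported just below). A minimal standalone re-creation of the same wiring follows; every command appears verbatim in the trace, only the comments are added, and root privileges plus the same interface names are assumed:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                                        # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on the default port
    ping -c 1 10.0.0.2                                                  # root ns reaches the target side
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns reaches the initiator side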
00:15:20.712 [2024-07-22 17:52:24.544937] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.712 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.712 [2024-07-22 17:52:24.618422] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:20.712 [2024-07-22 17:52:24.687761] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:20.712 [2024-07-22 17:52:24.687881] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.712 [2024-07-22 17:52:24.687889] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:20.712 [2024-07-22 17:52:24.687896] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:20.712 [2024-07-22 17:52:24.688024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:20.712 [2024-07-22 17:52:24.688143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:20.712 [2024-07-22 17:52:24.688145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.283 17:52:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:21.283 17:52:25 -- common/autotest_common.sh@852 -- # return 0 00:15:21.283 17:52:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:21.283 17:52:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:21.283 17:52:25 -- common/autotest_common.sh@10 -- # set +x 00:15:21.283 17:52:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.283 17:52:25 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:21.283 17:52:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:21.283 17:52:25 -- common/autotest_common.sh@10 -- # set +x 00:15:21.283 [2024-07-22 17:52:25.434579] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:21.283 17:52:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:21.283 17:52:25 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:21.283 17:52:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:21.283 17:52:25 -- common/autotest_common.sh@10 -- # set +x 00:15:21.283 17:52:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:21.283 17:52:25 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:21.283 17:52:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:21.283 17:52:25 -- common/autotest_common.sh@10 -- # set +x 00:15:21.283 [2024-07-22 17:52:25.458764] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:21.283 17:52:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:21.283 17:52:25 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:21.283 17:52:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:21.283 17:52:25 -- common/autotest_common.sh@10 -- # set +x 00:15:21.283 NULL1 00:15:21.283 17:52:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:21.283 17:52:25 -- target/connect_stress.sh@21 -- # PERF_PID=1615292 00:15:21.283 17:52:25 -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:21.283 17:52:25 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:21.283 17:52:25 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:21.283 17:52:25 -- target/connect_stress.sh@27 -- # seq 1 20 00:15:21.283 17:52:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:21.283 17:52:25 -- target/connect_stress.sh@28 -- # cat 00:15:21.283 17:52:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:21.283 17:52:25 -- target/connect_stress.sh@28 -- # cat 00:15:21.283 17:52:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:21.283 17:52:25 -- target/connect_stress.sh@28 -- # cat 00:15:21.283 17:52:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:21.283 17:52:25 -- target/connect_stress.sh@28 -- # cat 00:15:21.283 17:52:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:21.283 17:52:25 -- target/connect_stress.sh@28 -- # cat 00:15:21.283 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.283 17:52:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:21.283 17:52:25 -- target/connect_stress.sh@28 -- # cat 00:15:21.283 17:52:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:21.283 17:52:25 -- target/connect_stress.sh@28 -- # cat 00:15:21.283 17:52:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:21.283 17:52:25 -- target/connect_stress.sh@28 -- # cat 00:15:21.283 17:52:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:21.283 17:52:25 -- target/connect_stress.sh@28 -- # cat 00:15:21.283 17:52:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:21.283 17:52:25 -- target/connect_stress.sh@28 -- # cat 00:15:21.283 17:52:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:21.283 17:52:25 -- target/connect_stress.sh@28 -- # cat 00:15:21.283 17:52:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:21.283 17:52:25 -- target/connect_stress.sh@28 -- # cat 00:15:21.283 17:52:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:21.283 17:52:25 -- target/connect_stress.sh@28 -- # cat 00:15:21.283 17:52:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:21.283 17:52:25 -- target/connect_stress.sh@28 -- # cat 00:15:21.283 17:52:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:21.283 17:52:25 -- target/connect_stress.sh@28 -- # cat 00:15:21.283 17:52:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:21.283 17:52:25 -- target/connect_stress.sh@28 -- # cat 00:15:21.283 17:52:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:21.283 17:52:25 -- target/connect_stress.sh@28 -- # cat 00:15:21.544 17:52:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:21.544 17:52:25 -- target/connect_stress.sh@28 -- # cat 00:15:21.544 17:52:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:21.544 17:52:25 -- target/connect_stress.sh@28 -- # cat 00:15:21.544 17:52:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:21.544 17:52:25 -- target/connect_stress.sh@28 -- # cat 00:15:21.544 17:52:25 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:21.544 17:52:25 -- target/connect_stress.sh@35 -- # 
rpc_cmd 00:15:21.544 17:52:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:21.544 17:52:25 -- common/autotest_common.sh@10 -- # set +x 00:15:21.804 17:52:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:21.804 17:52:25 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:21.804 17:52:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:21.804 17:52:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:21.804 17:52:25 -- common/autotest_common.sh@10 -- # set +x 00:15:22.066 17:52:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:22.066 17:52:26 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:22.066 17:52:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:22.066 17:52:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:22.066 17:52:26 -- common/autotest_common.sh@10 -- # set +x 00:15:22.326 17:52:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:22.326 17:52:26 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:22.326 17:52:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:22.326 17:52:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:22.326 17:52:26 -- common/autotest_common.sh@10 -- # set +x 00:15:22.614 17:52:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:22.614 17:52:26 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:22.614 17:52:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:22.614 17:52:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:22.614 17:52:26 -- common/autotest_common.sh@10 -- # set +x 00:15:23.194 17:52:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:23.194 17:52:27 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:23.194 17:52:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:23.194 17:52:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:23.195 17:52:27 -- common/autotest_common.sh@10 -- # set +x 00:15:23.456 17:52:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:23.456 17:52:27 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:23.456 17:52:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:23.456 17:52:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:23.456 17:52:27 -- common/autotest_common.sh@10 -- # set +x 00:15:23.716 17:52:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:23.716 17:52:27 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:23.716 17:52:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:23.716 17:52:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:23.716 17:52:27 -- common/autotest_common.sh@10 -- # set +x 00:15:23.977 17:52:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:23.977 17:52:28 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:23.977 17:52:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:23.977 17:52:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:23.977 17:52:28 -- common/autotest_common.sh@10 -- # set +x 00:15:24.237 17:52:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:24.237 17:52:28 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:24.237 17:52:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:24.237 17:52:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:24.237 17:52:28 -- common/autotest_common.sh@10 -- # set +x 00:15:24.808 17:52:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:24.808 17:52:28 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:24.808 17:52:28 -- target/connect_stress.sh@35 -- # rpc_cmd 
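The alternating kill -0 1615292 / rpc_cmd lines above and below are the connect_stress main loop: target/connect_stress.sh@27-28 first built rpc.txt by appending twenty heredocs (their contents are not echoed in this trace), and the script then keeps replaying that batch through rpc_cmd for as long as the connect_stress initiator (PERF_PID 1615292, launched above with -t 10) is still alive; the loop ends once kill -0 finally fails. The target state it hammers was created just above with four RPCs. A rough manual equivalent is sketched below, writing rpc_cmd out as scripts/rpc.py against the /var/tmp/spdk.sock socket named in the waitforlisten message above; that substitution and the repo-relative paths are assumptions, the arguments themselves are verbatim from the trace:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512
    test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    # backgrounded; the script records this PID as PERF_PID and polls it with kill -0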
00:15:24.808 17:52:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:24.808 17:52:28 -- common/autotest_common.sh@10 -- # set +x 00:15:25.069 17:52:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.069 17:52:29 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:25.069 17:52:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:25.069 17:52:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.069 17:52:29 -- common/autotest_common.sh@10 -- # set +x 00:15:25.330 17:52:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.330 17:52:29 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:25.330 17:52:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:25.330 17:52:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.330 17:52:29 -- common/autotest_common.sh@10 -- # set +x 00:15:25.590 17:52:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:25.590 17:52:29 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:25.590 17:52:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:25.590 17:52:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:25.590 17:52:29 -- common/autotest_common.sh@10 -- # set +x 00:15:26.160 17:52:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:26.160 17:52:30 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:26.160 17:52:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:26.160 17:52:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:26.160 17:52:30 -- common/autotest_common.sh@10 -- # set +x 00:15:26.421 17:52:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:26.421 17:52:30 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:26.421 17:52:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:26.421 17:52:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:26.421 17:52:30 -- common/autotest_common.sh@10 -- # set +x 00:15:26.681 17:52:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:26.681 17:52:30 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:26.681 17:52:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:26.681 17:52:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:26.681 17:52:30 -- common/autotest_common.sh@10 -- # set +x 00:15:26.943 17:52:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:26.943 17:52:31 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:26.943 17:52:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:26.943 17:52:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:26.943 17:52:31 -- common/autotest_common.sh@10 -- # set +x 00:15:27.203 17:52:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:27.203 17:52:31 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:27.203 17:52:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:27.203 17:52:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:27.203 17:52:31 -- common/autotest_common.sh@10 -- # set +x 00:15:27.774 17:52:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:27.774 17:52:31 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:27.774 17:52:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:27.774 17:52:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:27.774 17:52:31 -- common/autotest_common.sh@10 -- # set +x 00:15:28.034 17:52:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:28.034 17:52:32 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:28.034 17:52:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:28.034 
17:52:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:28.034 17:52:32 -- common/autotest_common.sh@10 -- # set +x 00:15:28.294 17:52:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:28.294 17:52:32 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:28.294 17:52:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:28.294 17:52:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:28.294 17:52:32 -- common/autotest_common.sh@10 -- # set +x 00:15:28.553 17:52:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:28.553 17:52:32 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:28.553 17:52:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:28.553 17:52:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:28.553 17:52:32 -- common/autotest_common.sh@10 -- # set +x 00:15:28.813 17:52:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:28.813 17:52:33 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:28.813 17:52:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:28.813 17:52:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:28.813 17:52:33 -- common/autotest_common.sh@10 -- # set +x 00:15:29.382 17:52:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.382 17:52:33 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:29.382 17:52:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:29.382 17:52:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.382 17:52:33 -- common/autotest_common.sh@10 -- # set +x 00:15:29.643 17:52:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.643 17:52:33 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:29.643 17:52:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:29.643 17:52:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.643 17:52:33 -- common/autotest_common.sh@10 -- # set +x 00:15:29.903 17:52:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.903 17:52:34 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:29.903 17:52:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:29.903 17:52:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.903 17:52:34 -- common/autotest_common.sh@10 -- # set +x 00:15:30.163 17:52:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:30.163 17:52:34 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:30.163 17:52:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:30.163 17:52:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.163 17:52:34 -- common/autotest_common.sh@10 -- # set +x 00:15:30.424 17:52:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:30.424 17:52:34 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:30.424 17:52:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:30.424 17:52:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.424 17:52:34 -- common/autotest_common.sh@10 -- # set +x 00:15:30.995 17:52:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:30.995 17:52:35 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:30.995 17:52:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:30.995 17:52:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.995 17:52:35 -- common/autotest_common.sh@10 -- # set +x 00:15:31.258 17:52:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.258 17:52:35 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:31.258 17:52:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:31.258 17:52:35 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:15:31.258 17:52:35 -- common/autotest_common.sh@10 -- # set +x 00:15:31.518 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:31.518 17:52:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:31.518 17:52:35 -- target/connect_stress.sh@34 -- # kill -0 1615292 00:15:31.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1615292) - No such process 00:15:31.518 17:52:35 -- target/connect_stress.sh@38 -- # wait 1615292 00:15:31.518 17:52:35 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:31.518 17:52:35 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:31.518 17:52:35 -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:31.518 17:52:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:31.518 17:52:35 -- nvmf/common.sh@116 -- # sync 00:15:31.518 17:52:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:31.518 17:52:35 -- nvmf/common.sh@119 -- # set +e 00:15:31.518 17:52:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:31.518 17:52:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:31.518 rmmod nvme_tcp 00:15:31.518 rmmod nvme_fabrics 00:15:31.518 rmmod nvme_keyring 00:15:31.518 17:52:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:31.518 17:52:35 -- nvmf/common.sh@123 -- # set -e 00:15:31.518 17:52:35 -- nvmf/common.sh@124 -- # return 0 00:15:31.518 17:52:35 -- nvmf/common.sh@477 -- # '[' -n 1615063 ']' 00:15:31.518 17:52:35 -- nvmf/common.sh@478 -- # killprocess 1615063 00:15:31.518 17:52:35 -- common/autotest_common.sh@926 -- # '[' -z 1615063 ']' 00:15:31.518 17:52:35 -- common/autotest_common.sh@930 -- # kill -0 1615063 00:15:31.518 17:52:35 -- common/autotest_common.sh@931 -- # uname 00:15:31.518 17:52:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:31.518 17:52:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1615063 00:15:31.779 17:52:35 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:31.779 17:52:35 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:31.779 17:52:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1615063' 00:15:31.779 killing process with pid 1615063 00:15:31.779 17:52:35 -- common/autotest_common.sh@945 -- # kill 1615063 00:15:31.779 17:52:35 -- common/autotest_common.sh@950 -- # wait 1615063 00:15:31.779 17:52:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:31.779 17:52:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:31.779 17:52:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:31.779 17:52:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:31.779 17:52:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:31.779 17:52:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.779 17:52:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:31.779 17:52:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.324 17:52:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:34.324 00:15:34.324 real 0m21.663s 00:15:34.324 user 0m42.626s 00:15:34.324 sys 0m9.192s 00:15:34.324 17:52:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:34.324 17:52:37 -- common/autotest_common.sh@10 -- # set +x 00:15:34.324 ************************************ 00:15:34.324 END TEST nvmf_connect_stress 00:15:34.324 
************************************ 00:15:34.324 17:52:38 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:34.324 17:52:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:34.324 17:52:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:34.324 17:52:38 -- common/autotest_common.sh@10 -- # set +x 00:15:34.324 ************************************ 00:15:34.324 START TEST nvmf_fused_ordering 00:15:34.324 ************************************ 00:15:34.324 17:52:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:34.324 * Looking for test storage... 00:15:34.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:34.324 17:52:38 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:34.324 17:52:38 -- nvmf/common.sh@7 -- # uname -s 00:15:34.324 17:52:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:34.324 17:52:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.324 17:52:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.324 17:52:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.324 17:52:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.324 17:52:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.324 17:52:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.324 17:52:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.324 17:52:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.324 17:52:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.324 17:52:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:15:34.324 17:52:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:15:34.324 17:52:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.324 17:52:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.324 17:52:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:34.324 17:52:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:34.324 17:52:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.324 17:52:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.324 17:52:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.324 17:52:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.324 17:52:38 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.325 17:52:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.325 17:52:38 -- paths/export.sh@5 -- # export PATH 00:15:34.325 17:52:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.325 17:52:38 -- nvmf/common.sh@46 -- # : 0 00:15:34.325 17:52:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:34.325 17:52:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:34.325 17:52:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:34.325 17:52:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.325 17:52:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.325 17:52:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:34.325 17:52:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:34.325 17:52:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:34.325 17:52:38 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:34.325 17:52:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:34.325 17:52:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:34.325 17:52:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:34.325 17:52:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:34.325 17:52:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:34.325 17:52:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.325 17:52:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:34.325 17:52:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.325 17:52:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:34.325 17:52:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:34.325 17:52:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:34.325 17:52:38 -- common/autotest_common.sh@10 -- # set +x 00:15:42.472 17:52:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:42.472 17:52:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:42.472 17:52:45 -- nvmf/common.sh@290 -- # local -a pci_devs 
00:15:42.472 17:52:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:42.472 17:52:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:42.472 17:52:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:42.472 17:52:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:42.472 17:52:45 -- nvmf/common.sh@294 -- # net_devs=() 00:15:42.472 17:52:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:42.472 17:52:45 -- nvmf/common.sh@295 -- # e810=() 00:15:42.472 17:52:45 -- nvmf/common.sh@295 -- # local -ga e810 00:15:42.472 17:52:45 -- nvmf/common.sh@296 -- # x722=() 00:15:42.472 17:52:45 -- nvmf/common.sh@296 -- # local -ga x722 00:15:42.472 17:52:45 -- nvmf/common.sh@297 -- # mlx=() 00:15:42.472 17:52:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:42.472 17:52:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:42.472 17:52:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:42.472 17:52:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:42.472 17:52:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:42.472 17:52:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:42.472 17:52:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:42.472 17:52:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:42.472 17:52:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:42.472 17:52:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:42.472 17:52:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:42.472 17:52:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:42.472 17:52:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:42.472 17:52:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:42.472 17:52:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:42.472 17:52:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:42.472 17:52:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:42.472 17:52:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:42.472 17:52:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:42.472 17:52:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:42.472 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:42.472 17:52:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:42.472 17:52:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:42.472 17:52:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:42.472 17:52:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:42.472 17:52:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:42.472 17:52:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:42.472 17:52:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:42.472 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:42.472 17:52:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:42.472 17:52:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:42.472 17:52:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:42.472 17:52:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:42.472 17:52:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:42.472 17:52:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:42.472 17:52:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:42.472 17:52:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 
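This block (a repeat of the device scan from the connect_stress run) is gather_supported_nvmf_pci_devs bucketing NICs purely by PCI vendor and device ID: Intel (0x8086) 0x1592 and 0x159b land in the e810 array, 0x37d2 in x722, and the listed Mellanox (0x15b3) IDs in mlx; because the harness then takes the e810 bucket ([[ e810 == e810 ]] above), only the two ice ports 0000:4b:00.0/.1 survive and their kernel netdev names are read from sysfs. A stripped-down sketch of that bucketing is shown below; the real script consults a prebuilt pci_bus_cache, so reading sysfs directly here is an illustrative assumption, not the script's actual mechanism:

    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=()
    for pci in /sys/bus/pci/devices/*; do
        ven=$(<"$pci/vendor") dev=$(<"$pci/device")
        case "$ven:$dev" in
            $intel:0x1592 | $intel:0x159b) e810+=("${pci##*/}") ;;   # ice / E810 ports
            $intel:0x37d2)                 x722+=("${pci##*/}") ;;
            $mellanox:0x*)                 mlx+=("${pci##*/}") ;;    # the script lists specific IDs; the wildcard is a simplification
        esac
    done
    pci_devs=("${e810[@]}")                                          # e810 NICs selected for this job
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)             # kernel netdev name(s), e.g. cvl_0_0, cvl_0_1
        echo "Found net devices under $pci: ${pci_net_devs[*]##*/}"
    done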
00:15:42.472 17:52:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:42.472 17:52:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:42.472 17:52:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:42.472 17:52:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:42.472 17:52:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:42.472 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:42.472 17:52:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:42.472 17:52:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:42.472 17:52:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:42.472 17:52:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:42.472 17:52:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:42.472 17:52:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:42.472 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:42.472 17:52:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:42.472 17:52:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:42.472 17:52:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:42.472 17:52:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:42.472 17:52:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:42.472 17:52:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:42.472 17:52:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:42.472 17:52:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:42.472 17:52:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:42.472 17:52:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:42.472 17:52:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:42.472 17:52:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:42.472 17:52:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:42.472 17:52:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:42.472 17:52:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:42.472 17:52:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:42.472 17:52:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:42.472 17:52:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:42.472 17:52:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:42.472 17:52:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:42.472 17:52:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:42.472 17:52:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:42.472 17:52:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:42.472 17:52:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:42.473 17:52:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:42.473 17:52:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:42.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:42.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:15:42.473 00:15:42.473 --- 10.0.0.2 ping statistics --- 00:15:42.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.473 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:15:42.473 17:52:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:42.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:42.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:15:42.473 00:15:42.473 --- 10.0.0.1 ping statistics --- 00:15:42.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.473 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:15:42.473 17:52:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:42.473 17:52:46 -- nvmf/common.sh@410 -- # return 0 00:15:42.473 17:52:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:42.473 17:52:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:42.473 17:52:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:42.473 17:52:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:42.473 17:52:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:42.473 17:52:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:42.473 17:52:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:42.473 17:52:46 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:42.473 17:52:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:42.473 17:52:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:42.473 17:52:46 -- common/autotest_common.sh@10 -- # set +x 00:15:42.473 17:52:46 -- nvmf/common.sh@469 -- # nvmfpid=1621466 00:15:42.473 17:52:46 -- nvmf/common.sh@470 -- # waitforlisten 1621466 00:15:42.473 17:52:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:42.473 17:52:46 -- common/autotest_common.sh@819 -- # '[' -z 1621466 ']' 00:15:42.473 17:52:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.473 17:52:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:42.473 17:52:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.473 17:52:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:42.473 17:52:46 -- common/autotest_common.sh@10 -- # set +x 00:15:42.473 [2024-07-22 17:52:46.199294] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:42.473 [2024-07-22 17:52:46.199428] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.473 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.473 [2024-07-22 17:52:46.330612] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.473 [2024-07-22 17:52:46.389987] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:42.473 [2024-07-22 17:52:46.390099] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:42.473 [2024-07-22 17:52:46.390107] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:42.473 [2024-07-22 17:52:46.390113] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:42.473 [2024-07-22 17:52:46.390131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.733 17:52:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:42.733 17:52:46 -- common/autotest_common.sh@852 -- # return 0 00:15:42.733 17:52:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:42.733 17:52:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:42.733 17:52:46 -- common/autotest_common.sh@10 -- # set +x 00:15:42.733 17:52:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:42.733 17:52:46 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:42.733 17:52:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.733 17:52:46 -- common/autotest_common.sh@10 -- # set +x 00:15:42.733 [2024-07-22 17:52:46.981649] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:42.733 17:52:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.733 17:52:46 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:42.733 17:52:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.733 17:52:46 -- common/autotest_common.sh@10 -- # set +x 00:15:42.733 17:52:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.734 17:52:46 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:42.734 17:52:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.734 17:52:46 -- common/autotest_common.sh@10 -- # set +x 00:15:42.734 [2024-07-22 17:52:47.005805] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:42.994 17:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.994 17:52:47 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:42.994 17:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.994 17:52:47 -- common/autotest_common.sh@10 -- # set +x 00:15:42.994 NULL1 00:15:42.994 17:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.994 17:52:47 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:42.994 17:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.994 17:52:47 -- common/autotest_common.sh@10 -- # set +x 00:15:42.994 17:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.994 17:52:47 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:42.994 17:52:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:42.994 17:52:47 -- common/autotest_common.sh@10 -- # set +x 00:15:42.994 17:52:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:42.994 17:52:47 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:42.994 [2024-07-22 17:52:47.061233] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
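At this point the fused_ordering target is fully assembled: the same transport/subsystem/listener sequence as in the previous test, plus bdev_wait_for_examine and nvmf_subsystem_add_ns so that the null bdev NULL1 (bdev_null_create NULL1 1000 512) is actually exposed as a namespace of nqn.2016-06.io.spdk:cnode1, which is why the initiator below reports "Namespace ID: 1 size: 1GB". The fused_ordering(N) lines that follow read as one progress marker per completed iteration of the test binary's loop; the binary's source is not part of this log, so that reading is an inference from the output alone. Re-running the namespace wiring and the initiator by hand, with rpc_cmd again written out as scripts/rpc.py and paths shortened to be repo-relative (the arguments are verbatim from the trace), would be roughly:

    scripts/rpc.py bdev_null_create NULL1 1000 512                        # backing bdev for the namespace
    scripts/rpc.py bdev_wait_for_examine                                   # let bdev examination settle first
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'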
00:15:42.995 [2024-07-22 17:52:47.061299] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1621518 ] 00:15:42.995 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.256 Attached to nqn.2016-06.io.spdk:cnode1 00:15:43.256 Namespace ID: 1 size: 1GB 00:15:43.256 fused_ordering(0) 00:15:43.256 fused_ordering(1) 00:15:43.256 fused_ordering(2) 00:15:43.256 fused_ordering(3) 00:15:43.256 fused_ordering(4) 00:15:43.256 fused_ordering(5) 00:15:43.256 fused_ordering(6) 00:15:43.256 fused_ordering(7) 00:15:43.256 fused_ordering(8) 00:15:43.256 fused_ordering(9) 00:15:43.256 fused_ordering(10) 00:15:43.256 fused_ordering(11) 00:15:43.256 fused_ordering(12) 00:15:43.256 fused_ordering(13) 00:15:43.256 fused_ordering(14) 00:15:43.256 fused_ordering(15) 00:15:43.256 fused_ordering(16) 00:15:43.256 fused_ordering(17) 00:15:43.256 fused_ordering(18) 00:15:43.256 fused_ordering(19) 00:15:43.256 fused_ordering(20) 00:15:43.256 fused_ordering(21) 00:15:43.256 fused_ordering(22) 00:15:43.256 fused_ordering(23) 00:15:43.256 fused_ordering(24) 00:15:43.256 fused_ordering(25) 00:15:43.256 fused_ordering(26) 00:15:43.256 fused_ordering(27) 00:15:43.256 fused_ordering(28) 00:15:43.256 fused_ordering(29) 00:15:43.256 fused_ordering(30) 00:15:43.256 fused_ordering(31) 00:15:43.256 fused_ordering(32) 00:15:43.256 fused_ordering(33) 00:15:43.256 fused_ordering(34) 00:15:43.256 fused_ordering(35) 00:15:43.256 fused_ordering(36) 00:15:43.256 fused_ordering(37) 00:15:43.256 fused_ordering(38) 00:15:43.256 fused_ordering(39) 00:15:43.256 fused_ordering(40) 00:15:43.256 fused_ordering(41) 00:15:43.256 fused_ordering(42) 00:15:43.256 fused_ordering(43) 00:15:43.256 fused_ordering(44) 00:15:43.256 fused_ordering(45) 00:15:43.256 fused_ordering(46) 00:15:43.256 fused_ordering(47) 00:15:43.256 fused_ordering(48) 00:15:43.256 fused_ordering(49) 00:15:43.257 fused_ordering(50) 00:15:43.257 fused_ordering(51) 00:15:43.257 fused_ordering(52) 00:15:43.257 fused_ordering(53) 00:15:43.257 fused_ordering(54) 00:15:43.257 fused_ordering(55) 00:15:43.257 fused_ordering(56) 00:15:43.257 fused_ordering(57) 00:15:43.257 fused_ordering(58) 00:15:43.257 fused_ordering(59) 00:15:43.257 fused_ordering(60) 00:15:43.257 fused_ordering(61) 00:15:43.257 fused_ordering(62) 00:15:43.257 fused_ordering(63) 00:15:43.257 fused_ordering(64) 00:15:43.257 fused_ordering(65) 00:15:43.257 fused_ordering(66) 00:15:43.257 fused_ordering(67) 00:15:43.257 fused_ordering(68) 00:15:43.257 fused_ordering(69) 00:15:43.257 fused_ordering(70) 00:15:43.257 fused_ordering(71) 00:15:43.257 fused_ordering(72) 00:15:43.257 fused_ordering(73) 00:15:43.257 fused_ordering(74) 00:15:43.257 fused_ordering(75) 00:15:43.257 fused_ordering(76) 00:15:43.257 fused_ordering(77) 00:15:43.257 fused_ordering(78) 00:15:43.257 fused_ordering(79) 00:15:43.257 fused_ordering(80) 00:15:43.257 fused_ordering(81) 00:15:43.257 fused_ordering(82) 00:15:43.257 fused_ordering(83) 00:15:43.257 fused_ordering(84) 00:15:43.257 fused_ordering(85) 00:15:43.257 fused_ordering(86) 00:15:43.257 fused_ordering(87) 00:15:43.257 fused_ordering(88) 00:15:43.257 fused_ordering(89) 00:15:43.257 fused_ordering(90) 00:15:43.257 fused_ordering(91) 00:15:43.257 fused_ordering(92) 00:15:43.257 fused_ordering(93) 00:15:43.257 fused_ordering(94) 00:15:43.257 fused_ordering(95) 00:15:43.257 fused_ordering(96) 00:15:43.257 
fused_ordering(97) 00:15:43.257 [fused_ordering(98) through fused_ordering(956) elided: the counter advances one entry at a time with no errors reported, the elapsed-time stamp moving from 00:15:43.257 through 00:15:43.518, 00:15:44.091 and 00:15:44.353 to 00:15:44.927] 00:15:44.927
fused_ordering(957) 00:15:44.927 fused_ordering(958) 00:15:44.927 fused_ordering(959) 00:15:44.927 fused_ordering(960) 00:15:44.927 fused_ordering(961) 00:15:44.927 fused_ordering(962) 00:15:44.927 fused_ordering(963) 00:15:44.927 fused_ordering(964) 00:15:44.927 fused_ordering(965) 00:15:44.927 fused_ordering(966) 00:15:44.927 fused_ordering(967) 00:15:44.927 fused_ordering(968) 00:15:44.927 fused_ordering(969) 00:15:44.927 fused_ordering(970) 00:15:44.927 fused_ordering(971) 00:15:44.927 fused_ordering(972) 00:15:44.927 fused_ordering(973) 00:15:44.927 fused_ordering(974) 00:15:44.927 fused_ordering(975) 00:15:44.927 fused_ordering(976) 00:15:44.927 fused_ordering(977) 00:15:44.927 fused_ordering(978) 00:15:44.927 fused_ordering(979) 00:15:44.927 fused_ordering(980) 00:15:44.927 fused_ordering(981) 00:15:44.927 fused_ordering(982) 00:15:44.927 fused_ordering(983) 00:15:44.927 fused_ordering(984) 00:15:44.927 fused_ordering(985) 00:15:44.927 fused_ordering(986) 00:15:44.927 fused_ordering(987) 00:15:44.927 fused_ordering(988) 00:15:44.927 fused_ordering(989) 00:15:44.927 fused_ordering(990) 00:15:44.927 fused_ordering(991) 00:15:44.927 fused_ordering(992) 00:15:44.927 fused_ordering(993) 00:15:44.927 fused_ordering(994) 00:15:44.927 fused_ordering(995) 00:15:44.927 fused_ordering(996) 00:15:44.927 fused_ordering(997) 00:15:44.927 fused_ordering(998) 00:15:44.927 fused_ordering(999) 00:15:44.927 fused_ordering(1000) 00:15:44.927 fused_ordering(1001) 00:15:44.927 fused_ordering(1002) 00:15:44.927 fused_ordering(1003) 00:15:44.927 fused_ordering(1004) 00:15:44.927 fused_ordering(1005) 00:15:44.927 fused_ordering(1006) 00:15:44.927 fused_ordering(1007) 00:15:44.927 fused_ordering(1008) 00:15:44.927 fused_ordering(1009) 00:15:44.927 fused_ordering(1010) 00:15:44.927 fused_ordering(1011) 00:15:44.927 fused_ordering(1012) 00:15:44.927 fused_ordering(1013) 00:15:44.927 fused_ordering(1014) 00:15:44.927 fused_ordering(1015) 00:15:44.927 fused_ordering(1016) 00:15:44.927 fused_ordering(1017) 00:15:44.927 fused_ordering(1018) 00:15:44.927 fused_ordering(1019) 00:15:44.927 fused_ordering(1020) 00:15:44.927 fused_ordering(1021) 00:15:44.927 fused_ordering(1022) 00:15:44.927 fused_ordering(1023) 00:15:44.927 17:52:49 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:44.927 17:52:49 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:44.927 17:52:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:44.927 17:52:49 -- nvmf/common.sh@116 -- # sync 00:15:44.927 17:52:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:44.927 17:52:49 -- nvmf/common.sh@119 -- # set +e 00:15:44.927 17:52:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:44.927 17:52:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:44.927 rmmod nvme_tcp 00:15:44.927 rmmod nvme_fabrics 00:15:44.927 rmmod nvme_keyring 00:15:44.927 17:52:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:44.927 17:52:49 -- nvmf/common.sh@123 -- # set -e 00:15:44.927 17:52:49 -- nvmf/common.sh@124 -- # return 0 00:15:44.927 17:52:49 -- nvmf/common.sh@477 -- # '[' -n 1621466 ']' 00:15:44.927 17:52:49 -- nvmf/common.sh@478 -- # killprocess 1621466 00:15:44.927 17:52:49 -- common/autotest_common.sh@926 -- # '[' -z 1621466 ']' 00:15:44.927 17:52:49 -- common/autotest_common.sh@930 -- # kill -0 1621466 00:15:44.927 17:52:49 -- common/autotest_common.sh@931 -- # uname 00:15:44.927 17:52:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:44.927 17:52:49 -- common/autotest_common.sh@932 -- # ps --no-headers 
-o comm= 1621466 00:15:44.927 17:52:49 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:44.927 17:52:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:44.927 17:52:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1621466' 00:15:44.927 killing process with pid 1621466 00:15:44.927 17:52:49 -- common/autotest_common.sh@945 -- # kill 1621466 00:15:44.927 17:52:49 -- common/autotest_common.sh@950 -- # wait 1621466 00:15:45.188 17:52:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:45.188 17:52:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:45.188 17:52:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:45.188 17:52:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:45.188 17:52:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:45.188 17:52:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.188 17:52:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:45.188 17:52:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.102 17:52:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:47.102 00:15:47.102 real 0m13.336s 00:15:47.102 user 0m6.639s 00:15:47.102 sys 0m7.072s 00:15:47.102 17:52:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:47.102 17:52:51 -- common/autotest_common.sh@10 -- # set +x 00:15:47.102 ************************************ 00:15:47.102 END TEST nvmf_fused_ordering 00:15:47.102 ************************************ 00:15:47.364 17:52:51 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:47.364 17:52:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:47.364 17:52:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:47.364 17:52:51 -- common/autotest_common.sh@10 -- # set +x 00:15:47.364 ************************************ 00:15:47.364 START TEST nvmf_delete_subsystem 00:15:47.364 ************************************ 00:15:47.364 17:52:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:47.364 * Looking for test storage... 
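For orientation: stripped of the xtrace noise, the delete_subsystem test that starts here runs one setup/teardown cycle against the TCP target. The sketch below restates the rpc_cmd and perf invocations that appear later in this log as plain commands; the scripts/rpc.py client path and the backgrounding of spdk_nvme_perf are readability assumptions, since the harness actually issues these through its rpc_cmd wrapper and tracks the perf pid itself.

  # target-side object setup (parameters exactly as logged below)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512        # 1000 MB null bdev, 512-byte blocks
  scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # large artificial latencies so I/O stays queued
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # queue I/O against the slow namespace from the initiator side (assumed backgrounded here)
  build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  sleep 2
  # delete the subsystem while that I/O is still outstanding; the queued requests
  # then complete with errors, which is the behavior this test exercises
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1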
00:15:47.364 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:47.364 17:52:51 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:47.364 17:52:51 -- nvmf/common.sh@7 -- # uname -s 00:15:47.364 17:52:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:47.364 17:52:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:47.364 17:52:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:47.364 17:52:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:47.364 17:52:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:47.364 17:52:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:47.364 17:52:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:47.364 17:52:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:47.364 17:52:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:47.364 17:52:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:47.364 17:52:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:15:47.364 17:52:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:15:47.364 17:52:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:47.364 17:52:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:47.364 17:52:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:47.364 17:52:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:47.364 17:52:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.364 17:52:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.364 17:52:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.364 17:52:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.364 17:52:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.364 17:52:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.364 17:52:51 -- paths/export.sh@5 -- # export PATH 00:15:47.364 17:52:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.364 17:52:51 -- nvmf/common.sh@46 -- # : 0 00:15:47.364 17:52:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:47.364 17:52:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:47.364 17:52:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:47.364 17:52:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:47.364 17:52:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:47.364 17:52:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:47.364 17:52:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:47.364 17:52:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:47.364 17:52:51 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:15:47.364 17:52:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:47.364 17:52:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:47.364 17:52:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:47.364 17:52:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:47.364 17:52:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:47.364 17:52:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.364 17:52:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:47.364 17:52:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.364 17:52:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:47.364 17:52:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:47.364 17:52:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:47.364 17:52:51 -- common/autotest_common.sh@10 -- # set +x 00:15:55.509 17:52:59 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:55.509 17:52:59 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:55.509 17:52:59 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:55.509 17:52:59 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:55.509 17:52:59 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:55.509 17:52:59 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:55.509 17:52:59 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:55.509 17:52:59 -- nvmf/common.sh@294 -- # net_devs=() 00:15:55.509 17:52:59 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:55.509 17:52:59 -- nvmf/common.sh@295 -- # e810=() 00:15:55.509 17:52:59 -- nvmf/common.sh@295 -- # local -ga e810 00:15:55.509 17:52:59 -- nvmf/common.sh@296 -- # x722=() 
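The pass through nvmf/common.sh above boils down to the following environment for this run (values copied from the trace; the real script derives them at runtime rather than hard-coding them):

  NVMF_PORT=4420
  NVMF_SECOND_PORT=4421
  NVMF_THIRD_PORT=4422
  NVMF_IP_PREFIX=192.168.100
  NVMF_IP_LEAST_ADDR=8
  NVMF_TCP_IP_ADDRESS=127.0.0.1
  NVMF_SERIAL=SPDKISFASTANDAWESOME
  NVME_HOSTNQN=$(nvme gen-hostnqn)   # resolved this run to nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a
  NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  NVME_CONNECT='nvme connect'
  NET_TYPE=phy

Because NET_TYPE=phy, the loopback default NVMF_TCP_IP_ADDRESS is not used here; the 10.0.0.1/10.0.0.2 pair seen further down is assigned by the physical-NIC setup path instead.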
00:15:55.509 17:52:59 -- nvmf/common.sh@296 -- # local -ga x722 00:15:55.509 17:52:59 -- nvmf/common.sh@297 -- # mlx=() 00:15:55.509 17:52:59 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:55.509 17:52:59 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:55.509 17:52:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:55.509 17:52:59 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:55.509 17:52:59 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:55.509 17:52:59 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:55.509 17:52:59 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:55.509 17:52:59 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:55.509 17:52:59 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:55.509 17:52:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:55.509 17:52:59 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:55.509 17:52:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:55.509 17:52:59 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:55.509 17:52:59 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:55.509 17:52:59 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:55.509 17:52:59 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:55.509 17:52:59 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:55.509 17:52:59 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:55.509 17:52:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:55.509 17:52:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:55.509 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:55.509 17:52:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:55.509 17:52:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:55.509 17:52:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.509 17:52:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.510 17:52:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:55.510 17:52:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:55.510 17:52:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:55.510 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:55.510 17:52:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:55.510 17:52:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:55.510 17:52:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.510 17:52:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.510 17:52:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:55.510 17:52:59 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:55.510 17:52:59 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:55.510 17:52:59 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:55.510 17:52:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:55.510 17:52:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.510 17:52:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:55.510 17:52:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.510 17:52:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:55.510 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:55.510 17:52:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:15:55.510 17:52:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:55.510 17:52:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.510 17:52:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:55.510 17:52:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.510 17:52:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:55.510 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:55.510 17:52:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.510 17:52:59 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:55.510 17:52:59 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:55.510 17:52:59 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:55.510 17:52:59 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:55.510 17:52:59 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:55.510 17:52:59 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:55.510 17:52:59 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:55.510 17:52:59 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:55.510 17:52:59 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:55.510 17:52:59 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:55.510 17:52:59 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:55.510 17:52:59 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:55.510 17:52:59 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:55.510 17:52:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:55.510 17:52:59 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:55.510 17:52:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:55.510 17:52:59 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:55.510 17:52:59 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:55.510 17:52:59 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:55.510 17:52:59 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:55.510 17:52:59 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:55.510 17:52:59 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:55.510 17:52:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:55.829 17:52:59 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:55.830 17:52:59 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:55.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:55.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:15:55.830 00:15:55.830 --- 10.0.0.2 ping statistics --- 00:15:55.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.830 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:15:55.830 17:52:59 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:55.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:55.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:15:55.830 00:15:55.830 --- 10.0.0.1 ping statistics --- 00:15:55.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.830 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:15:55.830 17:52:59 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:55.830 17:52:59 -- nvmf/common.sh@410 -- # return 0 00:15:55.830 17:52:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:55.830 17:52:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:55.830 17:52:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:55.830 17:52:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:55.830 17:52:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:55.830 17:52:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:55.830 17:52:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:55.830 17:52:59 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:15:55.830 17:52:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:55.830 17:52:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:55.830 17:52:59 -- common/autotest_common.sh@10 -- # set +x 00:15:55.830 17:52:59 -- nvmf/common.sh@469 -- # nvmfpid=1626311 00:15:55.830 17:52:59 -- nvmf/common.sh@470 -- # waitforlisten 1626311 00:15:55.830 17:52:59 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:55.830 17:52:59 -- common/autotest_common.sh@819 -- # '[' -z 1626311 ']' 00:15:55.830 17:52:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.830 17:52:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:55.830 17:52:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.830 17:52:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:55.830 17:52:59 -- common/autotest_common.sh@10 -- # set +x 00:15:55.830 [2024-07-22 17:52:59.899906] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:55.830 [2024-07-22 17:52:59.899967] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.830 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.830 [2024-07-22 17:52:59.992547] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:55.830 [2024-07-22 17:53:00.089625] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:55.830 [2024-07-22 17:53:00.089781] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:55.830 [2024-07-22 17:53:00.089791] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:55.830 [2024-07-22 17:53:00.089798] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
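Condensed, the interface wiring that the two pings above just verified looks like the following; this is a restatement of the ip/iptables commands from the trace (cvl_0_0 and cvl_0_1 are the two E810 ports found earlier), not a substitute for the suite's own nvmf_tcp_init:

  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # first port becomes the target NIC
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open TCP 4420 in the host firewall for this interface
  ping -c 1 10.0.0.2                                                  # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator reachability

The nvmf_tgt process whose startup is logged here is launched inside that same namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x3), so the listener it later opens on 10.0.0.2:4420 sits behind cvl_0_0.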
00:15:55.830 [2024-07-22 17:53:00.089935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.830 [2024-07-22 17:53:00.089940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.790 17:53:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:56.790 17:53:00 -- common/autotest_common.sh@852 -- # return 0 00:15:56.790 17:53:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:56.790 17:53:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:56.790 17:53:00 -- common/autotest_common.sh@10 -- # set +x 00:15:56.791 17:53:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:56.791 17:53:00 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:56.791 17:53:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:56.791 17:53:00 -- common/autotest_common.sh@10 -- # set +x 00:15:56.791 [2024-07-22 17:53:00.781473] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:56.791 17:53:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:56.791 17:53:00 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:56.791 17:53:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:56.791 17:53:00 -- common/autotest_common.sh@10 -- # set +x 00:15:56.791 17:53:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:56.791 17:53:00 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:56.791 17:53:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:56.791 17:53:00 -- common/autotest_common.sh@10 -- # set +x 00:15:56.791 [2024-07-22 17:53:00.805660] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.791 17:53:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:56.791 17:53:00 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:56.791 17:53:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:56.791 17:53:00 -- common/autotest_common.sh@10 -- # set +x 00:15:56.791 NULL1 00:15:56.791 17:53:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:56.791 17:53:00 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:56.791 17:53:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:56.791 17:53:00 -- common/autotest_common.sh@10 -- # set +x 00:15:56.791 Delay0 00:15:56.791 17:53:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:56.791 17:53:00 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:56.791 17:53:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:56.791 17:53:00 -- common/autotest_common.sh@10 -- # set +x 00:15:56.791 17:53:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:56.791 17:53:00 -- target/delete_subsystem.sh@28 -- # perf_pid=1626571 00:15:56.791 17:53:00 -- target/delete_subsystem.sh@30 -- # sleep 2 00:15:56.791 17:53:00 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:56.791 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.791 [2024-07-22 17:53:00.902291] 
subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:58.703 17:53:02 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:58.703 17:53:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.703 17:53:02 -- common/autotest_common.sh@10 -- # set +x 00:15:58.703 [repeated "Read/Write completed with error (sct=0, sc=8)" completions interleaved with "starting I/O failed: -6" markers elided] [2024-07-22 17:53:02.939835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60c8f0 is same with the state(5) to be set 00:15:58.703 [further "Read/Write completed with error (sct=0, sc=8)" completions and "starting I/O failed: -6" markers elided] 00:15:59.644 [2024-07-22 17:53:03.919784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x603910 is same with the state(5) to be set 00:15:59.905 [further "Read/Write completed with error (sct=0, sc=8)" completions elided] [2024-07-22 17:53:03.942891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f281400bf20 is same with the state(5) to be set 00:15:59.905 [further completions elided] [2024-07-22 17:53:03.943061] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f281400c480 is same with the state(5) to be set 00:15:59.905 [further completions elided] 00:15:59.905 Read completed with
error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 Write completed with error (sct=0, sc=8) 00:15:59.905 Write completed with error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 Write completed with error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 [2024-07-22 17:53:03.943744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60c640 is same with the state(5) to be set 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 Write completed with error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 Write completed with error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 Write completed with error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 Write completed with error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 Write completed with error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 Write completed with error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 Write completed with error (sct=0, sc=8) 00:15:59.905 Read completed with error (sct=0, sc=8) 00:15:59.905 [2024-07-22 17:53:03.943862] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x60cba0 is same with the state(5) to be set 00:15:59.905 [2024-07-22 17:53:03.944407] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x603910 (9): Bad file descriptor 00:15:59.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:15:59.905 17:53:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:59.905 17:53:03 -- target/delete_subsystem.sh@34 -- # delay=0 00:15:59.905 Initializing NVMe Controllers 00:15:59.905 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:59.905 Controller IO queue size 128, less than required. 00:15:59.905 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:15:59.905 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:59.905 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:59.905 Initialization complete. Launching workers. 00:15:59.905 ======================================================== 00:15:59.905 Latency(us) 00:15:59.905 Device Information : IOPS MiB/s Average min max 00:15:59.905 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 174.51 0.09 884752.45 233.93 1010180.39 00:15:59.905 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 171.52 0.08 955197.59 341.71 2000731.94 00:15:59.905 ======================================================== 00:15:59.905 Total : 346.03 0.17 919671.38 233.93 2000731.94 00:15:59.905 00:15:59.905 17:53:03 -- target/delete_subsystem.sh@35 -- # kill -0 1626571 00:15:59.905 17:53:03 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:16:00.477 17:53:04 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:16:00.477 17:53:04 -- target/delete_subsystem.sh@35 -- # kill -0 1626571 00:16:00.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1626571) - No such process 00:16:00.477 17:53:04 -- target/delete_subsystem.sh@45 -- # NOT wait 1626571 00:16:00.477 17:53:04 -- common/autotest_common.sh@640 -- # local es=0 00:16:00.477 17:53:04 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 1626571 00:16:00.477 17:53:04 -- common/autotest_common.sh@628 -- # local arg=wait 00:16:00.477 17:53:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:00.477 17:53:04 -- common/autotest_common.sh@632 -- # type -t wait 00:16:00.477 17:53:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:00.477 17:53:04 -- common/autotest_common.sh@643 -- # wait 1626571 00:16:00.477 17:53:04 -- common/autotest_common.sh@643 -- # es=1 00:16:00.477 17:53:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:00.477 17:53:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:00.477 17:53:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:00.477 17:53:04 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:00.477 17:53:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:00.477 17:53:04 -- common/autotest_common.sh@10 -- # set +x 00:16:00.477 17:53:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:00.477 17:53:04 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:00.477 17:53:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:00.477 17:53:04 -- common/autotest_common.sh@10 -- # set +x 00:16:00.477 [2024-07-22 17:53:04.476182] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:00.477 17:53:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:00.477 17:53:04 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:00.477 17:53:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:00.477 17:53:04 -- common/autotest_common.sh@10 -- # set +x 00:16:00.477 17:53:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:00.477 17:53:04 -- target/delete_subsystem.sh@54 -- # perf_pid=1627195 00:16:00.477 17:53:04 -- target/delete_subsystem.sh@56 -- # delay=0 00:16:00.477 17:53:04 -- 
target/delete_subsystem.sh@57 -- # kill -0 1627195 00:16:00.477 17:53:04 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:00.477 17:53:04 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:00.477 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.477 [2024-07-22 17:53:04.542920] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:00.738 17:53:04 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:00.738 17:53:04 -- target/delete_subsystem.sh@57 -- # kill -0 1627195 00:16:00.738 17:53:04 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:01.309 17:53:05 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:01.309 17:53:05 -- target/delete_subsystem.sh@57 -- # kill -0 1627195 00:16:01.309 17:53:05 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:01.881 17:53:06 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:01.881 17:53:06 -- target/delete_subsystem.sh@57 -- # kill -0 1627195 00:16:01.881 17:53:06 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:02.452 17:53:06 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:02.452 17:53:06 -- target/delete_subsystem.sh@57 -- # kill -0 1627195 00:16:02.452 17:53:06 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:03.024 17:53:07 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:03.024 17:53:07 -- target/delete_subsystem.sh@57 -- # kill -0 1627195 00:16:03.024 17:53:07 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:03.284 17:53:07 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:03.285 17:53:07 -- target/delete_subsystem.sh@57 -- # kill -0 1627195 00:16:03.285 17:53:07 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:16:03.545 Initializing NVMe Controllers 00:16:03.545 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:03.545 Controller IO queue size 128, less than required. 00:16:03.545 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:03.545 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:16:03.545 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:16:03.545 Initialization complete. Launching workers. 
00:16:03.545 ======================================================== 00:16:03.545 Latency(us) 00:16:03.545 Device Information : IOPS MiB/s Average min max 00:16:03.545 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002669.62 1000230.18 1041547.86 00:16:03.545 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004509.89 1000222.08 1011427.66 00:16:03.545 ======================================================== 00:16:03.545 Total : 256.00 0.12 1003589.76 1000222.08 1041547.86 00:16:03.545 00:16:03.805 17:53:08 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:16:03.805 17:53:08 -- target/delete_subsystem.sh@57 -- # kill -0 1627195 00:16:03.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1627195) - No such process 00:16:03.805 17:53:08 -- target/delete_subsystem.sh@67 -- # wait 1627195 00:16:03.805 17:53:08 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:16:03.805 17:53:08 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:16:03.805 17:53:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:03.805 17:53:08 -- nvmf/common.sh@116 -- # sync 00:16:03.805 17:53:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:03.805 17:53:08 -- nvmf/common.sh@119 -- # set +e 00:16:03.805 17:53:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:03.805 17:53:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:03.805 rmmod nvme_tcp 00:16:03.805 rmmod nvme_fabrics 00:16:04.066 rmmod nvme_keyring 00:16:04.066 17:53:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:04.066 17:53:08 -- nvmf/common.sh@123 -- # set -e 00:16:04.066 17:53:08 -- nvmf/common.sh@124 -- # return 0 00:16:04.066 17:53:08 -- nvmf/common.sh@477 -- # '[' -n 1626311 ']' 00:16:04.066 17:53:08 -- nvmf/common.sh@478 -- # killprocess 1626311 00:16:04.066 17:53:08 -- common/autotest_common.sh@926 -- # '[' -z 1626311 ']' 00:16:04.066 17:53:08 -- common/autotest_common.sh@930 -- # kill -0 1626311 00:16:04.066 17:53:08 -- common/autotest_common.sh@931 -- # uname 00:16:04.066 17:53:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:04.066 17:53:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1626311 00:16:04.066 17:53:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:04.066 17:53:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:04.066 17:53:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1626311' 00:16:04.066 killing process with pid 1626311 00:16:04.066 17:53:08 -- common/autotest_common.sh@945 -- # kill 1626311 00:16:04.066 17:53:08 -- common/autotest_common.sh@950 -- # wait 1626311 00:16:04.066 17:53:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:04.066 17:53:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:04.066 17:53:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:04.066 17:53:08 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:04.066 17:53:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:04.066 17:53:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.066 17:53:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:04.066 17:53:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.614 17:53:10 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:06.614 00:16:06.614 real 0m18.948s 00:16:06.614 user 0m30.932s 00:16:06.614 sys 0m6.982s 
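The trace above shows the pattern delete_subsystem.sh uses around spdk_nvme_perf: launch it in the background, poll it with kill -0 every half second while the subsystem is deleted, and reap it with wait once it exits. A minimal bash sketch of that polling pattern, with variable names chosen for illustration rather than copied from the script:

    # Poll a backgrounded spdk_nvme_perf until it exits, giving up after ~20 rounds
    # (mirrors the "kill -0 ... / sleep 0.5" loop visible in the trace above).
    perf_pid=$!          # PID of the spdk_nvme_perf started with '&'
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && break   # stop polling after roughly ten seconds
        sleep 0.5
    done
    wait "$perf_pid" || true          # reap the child; a non-zero exit is expected here
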
00:16:06.614 17:53:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:06.614 17:53:10 -- common/autotest_common.sh@10 -- # set +x 00:16:06.614 ************************************ 00:16:06.614 END TEST nvmf_delete_subsystem 00:16:06.614 ************************************ 00:16:06.614 17:53:10 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:16:06.614 17:53:10 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:06.614 17:53:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:06.614 17:53:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:06.614 17:53:10 -- common/autotest_common.sh@10 -- # set +x 00:16:06.614 ************************************ 00:16:06.614 START TEST nvmf_nvme_cli 00:16:06.614 ************************************ 00:16:06.614 17:53:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:06.614 * Looking for test storage... 00:16:06.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:06.614 17:53:10 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:06.614 17:53:10 -- nvmf/common.sh@7 -- # uname -s 00:16:06.614 17:53:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:06.614 17:53:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:06.614 17:53:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:06.614 17:53:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:06.614 17:53:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:06.614 17:53:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:06.614 17:53:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:06.614 17:53:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:06.614 17:53:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:06.614 17:53:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:06.614 17:53:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:16:06.614 17:53:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:16:06.614 17:53:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:06.614 17:53:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:06.614 17:53:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:06.615 17:53:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:06.615 17:53:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:06.615 17:53:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:06.615 17:53:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:06.615 17:53:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.615 17:53:10 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.615 17:53:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.615 17:53:10 -- paths/export.sh@5 -- # export PATH 00:16:06.615 17:53:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.615 17:53:10 -- nvmf/common.sh@46 -- # : 0 00:16:06.615 17:53:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:06.615 17:53:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:06.615 17:53:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:06.615 17:53:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:06.615 17:53:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:06.615 17:53:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:06.615 17:53:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:06.615 17:53:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:06.615 17:53:10 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:06.615 17:53:10 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:06.615 17:53:10 -- target/nvme_cli.sh@14 -- # devs=() 00:16:06.615 17:53:10 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:16:06.615 17:53:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:06.615 17:53:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:06.615 17:53:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:06.615 17:53:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:06.615 17:53:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:06.615 17:53:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.615 17:53:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:06.615 17:53:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.615 17:53:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:06.615 17:53:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:06.615 17:53:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:06.615 17:53:10 -- common/autotest_common.sh@10 -- # set +x 00:16:14.787 17:53:18 -- 
nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:14.787 17:53:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:14.787 17:53:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:14.787 17:53:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:14.787 17:53:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:14.787 17:53:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:14.787 17:53:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:14.787 17:53:18 -- nvmf/common.sh@294 -- # net_devs=() 00:16:14.787 17:53:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:14.787 17:53:18 -- nvmf/common.sh@295 -- # e810=() 00:16:14.787 17:53:18 -- nvmf/common.sh@295 -- # local -ga e810 00:16:14.787 17:53:18 -- nvmf/common.sh@296 -- # x722=() 00:16:14.787 17:53:18 -- nvmf/common.sh@296 -- # local -ga x722 00:16:14.787 17:53:18 -- nvmf/common.sh@297 -- # mlx=() 00:16:14.787 17:53:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:14.787 17:53:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:14.787 17:53:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:14.787 17:53:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:14.787 17:53:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:14.787 17:53:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:14.787 17:53:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:14.787 17:53:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:14.787 17:53:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:14.787 17:53:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:14.787 17:53:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:14.787 17:53:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:14.787 17:53:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:14.787 17:53:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:14.787 17:53:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:14.787 17:53:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:14.787 17:53:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:14.787 17:53:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:14.787 17:53:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:14.787 17:53:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:14.787 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:14.787 17:53:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:14.787 17:53:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:14.787 17:53:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.787 17:53:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.787 17:53:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:14.787 17:53:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:14.787 17:53:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:14.787 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:14.787 17:53:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:14.787 17:53:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:14.787 17:53:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:14.787 17:53:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:14.787 17:53:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
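The nvmf/common.sh trace around this point matches the two Intel E810 functions (0x8086 - 0x159b) and then looks up the kernel netdevs bound to each PCI address through sysfs. A small bash sketch of that lookup, using the PCI addresses reported in the log; the loop itself is illustrative, not the library code:

    # List the network interfaces sysfs exposes for each matched PCI function
    # (same globbing idea as pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)).
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] || continue          # skip functions with no netdev bound
            echo "Found net device under $pci: $(basename "$netdir")"
        done
    done
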
00:16:14.787 17:53:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:14.787 17:53:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:14.787 17:53:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:14.787 17:53:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:14.787 17:53:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.787 17:53:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:14.787 17:53:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.787 17:53:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:14.787 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:14.787 17:53:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.787 17:53:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:14.787 17:53:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:14.787 17:53:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:14.787 17:53:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:14.787 17:53:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:14.787 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:14.787 17:53:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:14.787 17:53:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:14.787 17:53:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:14.787 17:53:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:14.787 17:53:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:14.787 17:53:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:14.787 17:53:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:14.787 17:53:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:14.787 17:53:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:14.787 17:53:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:14.787 17:53:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:14.787 17:53:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:14.787 17:53:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:14.787 17:53:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:14.787 17:53:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:14.787 17:53:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:14.787 17:53:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:14.787 17:53:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:14.787 17:53:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:14.787 17:53:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:14.787 17:53:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:14.787 17:53:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:14.787 17:53:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:14.787 17:53:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:14.787 17:53:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:14.787 17:53:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:14.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:14.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:16:14.787 00:16:14.787 --- 10.0.0.2 ping statistics --- 00:16:14.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.787 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:16:14.787 17:53:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:14.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:14.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:16:14.787 00:16:14.787 --- 10.0.0.1 ping statistics --- 00:16:14.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.787 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:16:14.787 17:53:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:14.787 17:53:18 -- nvmf/common.sh@410 -- # return 0 00:16:14.787 17:53:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:14.787 17:53:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:14.787 17:53:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:14.787 17:53:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:14.787 17:53:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:14.787 17:53:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:14.787 17:53:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:14.787 17:53:18 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:14.787 17:53:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:14.787 17:53:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:14.787 17:53:18 -- common/autotest_common.sh@10 -- # set +x 00:16:14.787 17:53:18 -- nvmf/common.sh@469 -- # nvmfpid=1632105 00:16:14.787 17:53:18 -- nvmf/common.sh@470 -- # waitforlisten 1632105 00:16:14.788 17:53:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:14.788 17:53:18 -- common/autotest_common.sh@819 -- # '[' -z 1632105 ']' 00:16:14.788 17:53:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.788 17:53:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:14.788 17:53:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.788 17:53:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:14.788 17:53:18 -- common/autotest_common.sh@10 -- # set +x 00:16:14.788 [2024-07-22 17:53:18.526498] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:14.788 [2024-07-22 17:53:18.526580] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.788 EAL: No free 2048 kB hugepages reported on node 1 00:16:14.788 [2024-07-22 17:53:18.617984] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:14.788 [2024-07-22 17:53:18.710667] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:14.788 [2024-07-22 17:53:18.710831] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:14.788 [2024-07-22 17:53:18.710840] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:14.788 [2024-07-22 17:53:18.710847] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:14.788 [2024-07-22 17:53:18.710996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.788 [2024-07-22 17:53:18.711121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:14.788 [2024-07-22 17:53:18.711252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:14.788 [2024-07-22 17:53:18.711255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.358 17:53:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:15.358 17:53:19 -- common/autotest_common.sh@852 -- # return 0 00:16:15.358 17:53:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:15.358 17:53:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:15.358 17:53:19 -- common/autotest_common.sh@10 -- # set +x 00:16:15.358 17:53:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.358 17:53:19 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:15.358 17:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:15.358 17:53:19 -- common/autotest_common.sh@10 -- # set +x 00:16:15.358 [2024-07-22 17:53:19.427556] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:15.358 17:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:15.358 17:53:19 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:15.358 17:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:15.358 17:53:19 -- common/autotest_common.sh@10 -- # set +x 00:16:15.358 Malloc0 00:16:15.358 17:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:15.358 17:53:19 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:15.358 17:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:15.358 17:53:19 -- common/autotest_common.sh@10 -- # set +x 00:16:15.358 Malloc1 00:16:15.358 17:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:15.358 17:53:19 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:15.358 17:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:15.358 17:53:19 -- common/autotest_common.sh@10 -- # set +x 00:16:15.358 17:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:15.358 17:53:19 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:15.358 17:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:15.358 17:53:19 -- common/autotest_common.sh@10 -- # set +x 00:16:15.358 17:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:15.358 17:53:19 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:15.358 17:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:15.358 17:53:19 -- common/autotest_common.sh@10 -- # set +x 00:16:15.358 17:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:15.358 17:53:19 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:15.358 17:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:15.358 17:53:19 -- common/autotest_common.sh@10 -- # set +x 00:16:15.358 [2024-07-22 17:53:19.513902] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:16:15.358 17:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:15.358 17:53:19 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:15.358 17:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:15.358 17:53:19 -- common/autotest_common.sh@10 -- # set +x 00:16:15.358 17:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:15.358 17:53:19 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -a 10.0.0.2 -s 4420 00:16:15.358 00:16:15.358 Discovery Log Number of Records 2, Generation counter 2 00:16:15.358 =====Discovery Log Entry 0====== 00:16:15.358 trtype: tcp 00:16:15.358 adrfam: ipv4 00:16:15.358 subtype: current discovery subsystem 00:16:15.358 treq: not required 00:16:15.358 portid: 0 00:16:15.358 trsvcid: 4420 00:16:15.358 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:15.358 traddr: 10.0.0.2 00:16:15.358 eflags: explicit discovery connections, duplicate discovery information 00:16:15.358 sectype: none 00:16:15.358 =====Discovery Log Entry 1====== 00:16:15.358 trtype: tcp 00:16:15.358 adrfam: ipv4 00:16:15.358 subtype: nvme subsystem 00:16:15.358 treq: not required 00:16:15.358 portid: 0 00:16:15.358 trsvcid: 4420 00:16:15.358 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:15.358 traddr: 10.0.0.2 00:16:15.358 eflags: none 00:16:15.358 sectype: none 00:16:15.358 17:53:19 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:15.358 17:53:19 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:15.358 17:53:19 -- nvmf/common.sh@510 -- # local dev _ 00:16:15.358 17:53:19 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:15.358 17:53:19 -- nvmf/common.sh@509 -- # nvme list 00:16:15.358 17:53:19 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:16:15.358 17:53:19 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:15.358 17:53:19 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:16:15.358 17:53:19 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:15.358 17:53:19 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:15.358 17:53:19 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:17.270 17:53:21 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:17.270 17:53:21 -- common/autotest_common.sh@1177 -- # local i=0 00:16:17.270 17:53:21 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:16:17.270 17:53:21 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:16:17.270 17:53:21 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:16:17.270 17:53:21 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:19.183 17:53:23 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:19.183 17:53:23 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:19.183 17:53:23 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:19.183 17:53:23 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:16:19.183 17:53:23 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:19.183 17:53:23 -- common/autotest_common.sh@1187 -- # return 0 00:16:19.183 17:53:23 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:19.183 17:53:23 -- 
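Summarizing the nvme-cli exchange traced above: the discovery query reports both the discovery subsystem and nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, and the connect call then exposes the two Malloc namespaces as /dev/nvme0n1 and /dev/nvme0n2 (the matching disconnect appears further down). A condensed sketch of those calls, with the --hostnqn/--hostid flags from the log elided for brevity:

    # Discover, then attach to, the SPDK subsystem advertised on 10.0.0.2:4420
    nvme discover -t tcp -a 10.0.0.2 -s 4420                    # lists the two discovery log entries
    nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME      # both namespaces show up
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1               # tear the controller down again
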
nvmf/common.sh@510 -- # local dev _ 00:16:19.183 17:53:23 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:19.183 17:53:23 -- nvmf/common.sh@509 -- # nvme list 00:16:19.183 17:53:23 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:16:19.183 17:53:23 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:19.183 17:53:23 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:16:19.183 17:53:23 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:19.183 17:53:23 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:19.183 17:53:23 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:16:19.183 17:53:23 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:19.183 17:53:23 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:19.183 17:53:23 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:16:19.183 17:53:23 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:19.183 17:53:23 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:16:19.183 /dev/nvme0n1 ]] 00:16:19.183 17:53:23 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:19.183 17:53:23 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:19.183 17:53:23 -- nvmf/common.sh@510 -- # local dev _ 00:16:19.183 17:53:23 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:19.183 17:53:23 -- nvmf/common.sh@509 -- # nvme list 00:16:19.183 17:53:23 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:16:19.183 17:53:23 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:19.183 17:53:23 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:16:19.183 17:53:23 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:19.183 17:53:23 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:19.183 17:53:23 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:16:19.183 17:53:23 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:19.183 17:53:23 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:19.183 17:53:23 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:16:19.183 17:53:23 -- nvmf/common.sh@512 -- # read -r dev _ 00:16:19.183 17:53:23 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:19.183 17:53:23 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:19.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.183 17:53:23 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:19.183 17:53:23 -- common/autotest_common.sh@1198 -- # local i=0 00:16:19.183 17:53:23 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:19.183 17:53:23 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:19.183 17:53:23 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:19.183 17:53:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:19.183 17:53:23 -- common/autotest_common.sh@1210 -- # return 0 00:16:19.183 17:53:23 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:19.183 17:53:23 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:19.183 17:53:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.183 17:53:23 -- common/autotest_common.sh@10 -- # set +x 00:16:19.183 17:53:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.183 17:53:23 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:19.183 17:53:23 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:19.183 17:53:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:19.183 17:53:23 -- nvmf/common.sh@116 -- # sync 00:16:19.183 17:53:23 -- 
nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:19.183 17:53:23 -- nvmf/common.sh@119 -- # set +e 00:16:19.183 17:53:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:19.183 17:53:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:19.183 rmmod nvme_tcp 00:16:19.183 rmmod nvme_fabrics 00:16:19.183 rmmod nvme_keyring 00:16:19.183 17:53:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:19.183 17:53:23 -- nvmf/common.sh@123 -- # set -e 00:16:19.183 17:53:23 -- nvmf/common.sh@124 -- # return 0 00:16:19.183 17:53:23 -- nvmf/common.sh@477 -- # '[' -n 1632105 ']' 00:16:19.183 17:53:23 -- nvmf/common.sh@478 -- # killprocess 1632105 00:16:19.183 17:53:23 -- common/autotest_common.sh@926 -- # '[' -z 1632105 ']' 00:16:19.183 17:53:23 -- common/autotest_common.sh@930 -- # kill -0 1632105 00:16:19.183 17:53:23 -- common/autotest_common.sh@931 -- # uname 00:16:19.183 17:53:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:19.183 17:53:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1632105 00:16:19.183 17:53:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:19.183 17:53:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:19.183 17:53:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1632105' 00:16:19.183 killing process with pid 1632105 00:16:19.183 17:53:23 -- common/autotest_common.sh@945 -- # kill 1632105 00:16:19.183 17:53:23 -- common/autotest_common.sh@950 -- # wait 1632105 00:16:19.444 17:53:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:19.444 17:53:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:19.444 17:53:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:19.444 17:53:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:19.444 17:53:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:19.444 17:53:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.444 17:53:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.444 17:53:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.991 17:53:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:21.991 00:16:21.991 real 0m15.238s 00:16:21.991 user 0m21.889s 00:16:21.991 sys 0m6.349s 00:16:21.991 17:53:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:21.991 17:53:25 -- common/autotest_common.sh@10 -- # set +x 00:16:21.991 ************************************ 00:16:21.991 END TEST nvmf_nvme_cli 00:16:21.991 ************************************ 00:16:21.991 17:53:25 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:16:21.991 17:53:25 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:21.991 17:53:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:21.991 17:53:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:21.991 17:53:25 -- common/autotest_common.sh@10 -- # set +x 00:16:21.991 ************************************ 00:16:21.991 START TEST nvmf_host_management 00:16:21.991 ************************************ 00:16:21.991 17:53:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:21.991 * Looking for test storage... 
00:16:21.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:21.991 17:53:25 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:21.991 17:53:25 -- nvmf/common.sh@7 -- # uname -s 00:16:21.991 17:53:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:21.991 17:53:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:21.991 17:53:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:21.991 17:53:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:21.991 17:53:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:21.991 17:53:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:21.991 17:53:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:21.991 17:53:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:21.991 17:53:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:21.991 17:53:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:21.991 17:53:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:16:21.991 17:53:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:16:21.991 17:53:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:21.991 17:53:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:21.991 17:53:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:21.991 17:53:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:21.991 17:53:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:21.991 17:53:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:21.991 17:53:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:21.991 17:53:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.991 17:53:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.991 17:53:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.991 17:53:25 -- paths/export.sh@5 -- # export PATH 00:16:21.991 17:53:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.991 17:53:25 -- nvmf/common.sh@46 -- # : 0 00:16:21.991 17:53:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:21.992 17:53:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:21.992 17:53:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:21.992 17:53:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:21.992 17:53:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:21.992 17:53:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:21.992 17:53:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:21.992 17:53:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:21.992 17:53:25 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:21.992 17:53:25 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:21.992 17:53:25 -- target/host_management.sh@104 -- # nvmftestinit 00:16:21.992 17:53:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:21.992 17:53:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:21.992 17:53:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:21.992 17:53:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:21.992 17:53:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:21.992 17:53:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.992 17:53:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:21.992 17:53:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.992 17:53:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:21.992 17:53:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:21.992 17:53:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:21.992 17:53:25 -- common/autotest_common.sh@10 -- # set +x 00:16:30.127 17:53:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:30.127 17:53:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:30.127 17:53:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:30.127 17:53:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:30.127 17:53:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:30.127 17:53:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:30.127 17:53:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:30.127 17:53:33 -- nvmf/common.sh@294 -- # net_devs=() 00:16:30.127 17:53:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:30.127 
17:53:33 -- nvmf/common.sh@295 -- # e810=() 00:16:30.127 17:53:33 -- nvmf/common.sh@295 -- # local -ga e810 00:16:30.127 17:53:33 -- nvmf/common.sh@296 -- # x722=() 00:16:30.128 17:53:33 -- nvmf/common.sh@296 -- # local -ga x722 00:16:30.128 17:53:33 -- nvmf/common.sh@297 -- # mlx=() 00:16:30.128 17:53:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:30.128 17:53:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:30.128 17:53:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:30.128 17:53:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:30.128 17:53:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:30.128 17:53:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:30.128 17:53:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:30.128 17:53:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:30.128 17:53:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:30.128 17:53:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:30.128 17:53:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:30.128 17:53:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:30.128 17:53:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:30.128 17:53:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:30.128 17:53:33 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:30.128 17:53:33 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:30.128 17:53:33 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:30.128 17:53:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:30.128 17:53:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:30.128 17:53:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:30.128 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:30.128 17:53:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:30.128 17:53:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:30.128 17:53:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.128 17:53:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.128 17:53:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:30.128 17:53:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:30.128 17:53:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:30.128 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:30.128 17:53:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:30.128 17:53:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:30.128 17:53:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.128 17:53:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.128 17:53:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:30.128 17:53:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:30.128 17:53:33 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:30.128 17:53:33 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:30.128 17:53:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:30.128 17:53:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.128 17:53:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:30.128 17:53:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.128 17:53:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:4b:00.0: cvl_0_0' 00:16:30.128 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:30.128 17:53:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.128 17:53:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:30.128 17:53:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.128 17:53:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:30.128 17:53:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.128 17:53:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:30.128 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:30.128 17:53:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.128 17:53:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:30.128 17:53:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:30.128 17:53:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:30.128 17:53:33 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:30.128 17:53:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:30.128 17:53:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.128 17:53:33 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:30.128 17:53:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:30.128 17:53:33 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:30.128 17:53:33 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:30.128 17:53:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:30.128 17:53:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:30.128 17:53:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:30.128 17:53:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.128 17:53:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:30.128 17:53:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:30.128 17:53:33 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:30.128 17:53:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:30.128 17:53:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:30.128 17:53:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:30.128 17:53:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:30.128 17:53:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:30.128 17:53:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:30.128 17:53:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:30.128 17:53:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:30.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:30.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.766 ms 00:16:30.128 00:16:30.128 --- 10.0.0.2 ping statistics --- 00:16:30.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.128 rtt min/avg/max/mdev = 0.766/0.766/0.766/0.000 ms 00:16:30.128 17:53:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:30.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:30.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:16:30.128 00:16:30.128 --- 10.0.0.1 ping statistics --- 00:16:30.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.128 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:16:30.128 17:53:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.128 17:53:33 -- nvmf/common.sh@410 -- # return 0 00:16:30.128 17:53:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:30.128 17:53:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.128 17:53:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:30.128 17:53:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:30.128 17:53:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.128 17:53:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:30.128 17:53:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:30.128 17:53:33 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:16:30.128 17:53:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:30.128 17:53:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:30.129 17:53:33 -- common/autotest_common.sh@10 -- # set +x 00:16:30.129 ************************************ 00:16:30.129 START TEST nvmf_host_management 00:16:30.129 ************************************ 00:16:30.129 17:53:33 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:16:30.129 17:53:33 -- target/host_management.sh@69 -- # starttarget 00:16:30.129 17:53:33 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:30.129 17:53:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:30.129 17:53:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:30.129 17:53:33 -- common/autotest_common.sh@10 -- # set +x 00:16:30.129 17:53:33 -- nvmf/common.sh@469 -- # nvmfpid=1637281 00:16:30.129 17:53:33 -- nvmf/common.sh@470 -- # waitforlisten 1637281 00:16:30.129 17:53:33 -- common/autotest_common.sh@819 -- # '[' -z 1637281 ']' 00:16:30.129 17:53:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:30.129 17:53:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.129 17:53:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:30.129 17:53:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.129 17:53:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:30.129 17:53:33 -- common/autotest_common.sh@10 -- # set +x 00:16:30.129 [2024-07-22 17:53:33.442801] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:16:30.129 [2024-07-22 17:53:33.442858] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.129 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.129 [2024-07-22 17:53:33.516440] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:30.129 [2024-07-22 17:53:33.579296] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:30.129 [2024-07-22 17:53:33.579435] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:30.129 [2024-07-22 17:53:33.579444] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:30.129 [2024-07-22 17:53:33.579453] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:30.129 [2024-07-22 17:53:33.579686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:30.129 [2024-07-22 17:53:33.579821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:30.129 [2024-07-22 17:53:33.579977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.129 [2024-07-22 17:53:33.579978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:30.129 17:53:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:30.129 17:53:34 -- common/autotest_common.sh@852 -- # return 0 00:16:30.129 17:53:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:30.129 17:53:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:30.129 17:53:34 -- common/autotest_common.sh@10 -- # set +x 00:16:30.129 17:53:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:30.129 17:53:34 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:30.129 17:53:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:30.129 17:53:34 -- common/autotest_common.sh@10 -- # set +x 00:16:30.129 [2024-07-22 17:53:34.329795] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:30.129 17:53:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:30.129 17:53:34 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:30.129 17:53:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:30.129 17:53:34 -- common/autotest_common.sh@10 -- # set +x 00:16:30.129 17:53:34 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:30.129 17:53:34 -- target/host_management.sh@23 -- # cat 00:16:30.129 17:53:34 -- target/host_management.sh@30 -- # rpc_cmd 00:16:30.129 17:53:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:30.129 17:53:34 -- common/autotest_common.sh@10 -- # set +x 00:16:30.129 Malloc0 00:16:30.129 [2024-07-22 17:53:34.392700] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:30.389 17:53:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:30.389 17:53:34 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:30.389 17:53:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:30.389 17:53:34 -- common/autotest_common.sh@10 -- # set +x 00:16:30.389 17:53:34 -- target/host_management.sh@73 -- # perfpid=1637625 00:16:30.389 17:53:34 -- target/host_management.sh@74 -- # 
waitforlisten 1637625 /var/tmp/bdevperf.sock 00:16:30.389 17:53:34 -- common/autotest_common.sh@819 -- # '[' -z 1637625 ']' 00:16:30.389 17:53:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:30.389 17:53:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:30.389 17:53:34 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:30.389 17:53:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:30.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:30.389 17:53:34 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:30.389 17:53:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:30.389 17:53:34 -- common/autotest_common.sh@10 -- # set +x 00:16:30.389 17:53:34 -- nvmf/common.sh@520 -- # config=() 00:16:30.389 17:53:34 -- nvmf/common.sh@520 -- # local subsystem config 00:16:30.389 17:53:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:30.389 17:53:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:30.389 { 00:16:30.389 "params": { 00:16:30.389 "name": "Nvme$subsystem", 00:16:30.389 "trtype": "$TEST_TRANSPORT", 00:16:30.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:30.389 "adrfam": "ipv4", 00:16:30.389 "trsvcid": "$NVMF_PORT", 00:16:30.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:30.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:30.389 "hdgst": ${hdgst:-false}, 00:16:30.389 "ddgst": ${ddgst:-false} 00:16:30.389 }, 00:16:30.389 "method": "bdev_nvme_attach_controller" 00:16:30.389 } 00:16:30.389 EOF 00:16:30.389 )") 00:16:30.389 17:53:34 -- nvmf/common.sh@542 -- # cat 00:16:30.389 17:53:34 -- nvmf/common.sh@544 -- # jq . 00:16:30.389 17:53:34 -- nvmf/common.sh@545 -- # IFS=, 00:16:30.389 17:53:34 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:30.389 "params": { 00:16:30.389 "name": "Nvme0", 00:16:30.389 "trtype": "tcp", 00:16:30.389 "traddr": "10.0.0.2", 00:16:30.389 "adrfam": "ipv4", 00:16:30.389 "trsvcid": "4420", 00:16:30.389 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:30.389 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:30.389 "hdgst": false, 00:16:30.389 "ddgst": false 00:16:30.389 }, 00:16:30.389 "method": "bdev_nvme_attach_controller" 00:16:30.389 }' 00:16:30.389 [2024-07-22 17:53:34.488566] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:30.389 [2024-07-22 17:53:34.488617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1637625 ] 00:16:30.389 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.389 [2024-07-22 17:53:34.569350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.389 [2024-07-22 17:53:34.629474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.649 Running I/O for 10 seconds... 
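The trace above shows how bdevperf receives its target description: host_management.sh assembles a single bdev_nvme_attach_controller JSON fragment in shell, runs it through jq, and hands it to bdevperf on an anonymous file descriptor (--json /dev/fd/63) instead of writing a config file to disk. A minimal stand-alone sketch of that plumbing, reusing the values from this run (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode0) and with jq standing in for bdevperf as the consumer:

# Sketch only, not part of the test scripts: build the controller JSON in shell
# and hand it to a consumer over /dev/fd, the same pattern gen_nvmf_target_json
# plus "--json /dev/fd/63" uses above.
config='{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}'
# <(...) turns the string into a /dev/fd/NN path; jq here only validates and
# pretty-prints it, where bdevperf would parse the same descriptor path.
jq . <(printf '%s\n' "$config")

The -q 64 -o 65536 -w verify -t 10 flags seen in the bdevperf command line above ask for a queue depth of 64 with 65536-byte verify I/O for 10 seconds against that controller.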
00:16:31.218 17:53:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:31.218 17:53:35 -- common/autotest_common.sh@852 -- # return 0 00:16:31.218 17:53:35 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:31.218 17:53:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.218 17:53:35 -- common/autotest_common.sh@10 -- # set +x 00:16:31.218 17:53:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.218 17:53:35 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:31.218 17:53:35 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:31.218 17:53:35 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:31.218 17:53:35 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:31.218 17:53:35 -- target/host_management.sh@52 -- # local ret=1 00:16:31.218 17:53:35 -- target/host_management.sh@53 -- # local i 00:16:31.218 17:53:35 -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:31.218 17:53:35 -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:31.218 17:53:35 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:31.218 17:53:35 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:31.218 17:53:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.218 17:53:35 -- common/autotest_common.sh@10 -- # set +x 00:16:31.218 17:53:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.218 17:53:35 -- target/host_management.sh@55 -- # read_io_count=1296 00:16:31.218 17:53:35 -- target/host_management.sh@58 -- # '[' 1296 -ge 100 ']' 00:16:31.218 17:53:35 -- target/host_management.sh@59 -- # ret=0 00:16:31.218 17:53:35 -- target/host_management.sh@60 -- # break 00:16:31.218 17:53:35 -- target/host_management.sh@64 -- # return 0 00:16:31.218 17:53:35 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:31.218 17:53:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.218 17:53:35 -- common/autotest_common.sh@10 -- # set +x 00:16:31.218 [2024-07-22 17:53:35.399968] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1266390 is same with the state(5) to be set 00:16:31.218 [2024-07-22 17:53:35.400141] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1266390 is same with the state(5) to be set 00:16:31.218 [2024-07-22 17:53:35.400167] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1266390 is same with the state(5) to be set 00:16:31.218 [2024-07-22 17:53:35.400187] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1266390 is same with the state(5) to be set 00:16:31.218 [2024-07-22 17:53:35.400207] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1266390 is same with the state(5) to be set 00:16:31.218 [2024-07-22 17:53:35.400226] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1266390 is same with the state(5) to be set 00:16:31.218 [2024-07-22 17:53:35.400245] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1266390 is same with the state(5) to be set 00:16:31.218 [2024-07-22 17:53:35.400265] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1266390 is same with the 
state(5) to be set 00:16:31.218 [2024-07-22 17:53:35.400284] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1266390 is same with the state(5) to be set 00:16:31.218 [2024-07-22 17:53:35.400303] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1266390 is same with the state(5) to be set 00:16:31.218 [2024-07-22 17:53:35.400321] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1266390 is same with the state(5) to be set 00:16:31.218 [2024-07-22 17:53:35.400340] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1266390 is same with the state(5) to be set 00:16:31.218 [2024-07-22 17:53:35.400371] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1266390 is same with the state(5) to be set 00:16:31.218 [2024-07-22 17:53:35.400391] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1266390 is same with the state(5) to be set 00:16:31.218 [2024-07-22 17:53:35.401079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.218 [2024-07-22 17:53:35.401117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.218 [2024-07-22 17:53:35.401133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:49664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401230] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:50944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:51072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:51584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:52096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:53376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401389] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:53504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:53632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:47104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:53760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:53888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:54016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401546] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:54144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:54272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:54400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:54528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:54656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:54784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:54912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:55040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:55168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:55296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401703] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:55424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:55680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.219 [2024-07-22 17:53:35.401749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:55808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.219 [2024-07-22 17:53:35.401756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.220 [2024-07-22 17:53:35.401764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.220 [2024-07-22 17:53:35.401771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.220 [2024-07-22 17:53:35.401779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:56064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.220 [2024-07-22 17:53:35.401786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.220 [2024-07-22 17:53:35.401794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:56192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.220 [2024-07-22 17:53:35.401801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.220 [2024-07-22 17:53:35.401810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:56320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.220 [2024-07-22 17:53:35.401816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.220 [2024-07-22 17:53:35.401824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:56448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.220 [2024-07-22 17:53:35.401831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.220 [2024-07-22 17:53:35.401839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:47616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.220 [2024-07-22 17:53:35.401846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.220 [2024-07-22 17:53:35.401854] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:56576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.220 [2024-07-22 17:53:35.401863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.220 [2024-07-22 17:53:35.401872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:56704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.220 [2024-07-22 17:53:35.401878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.220 [2024-07-22 17:53:35.401887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:56832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.220 [2024-07-22 17:53:35.401894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.220 [2024-07-22 17:53:35.401903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:47744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.220 [2024-07-22 17:53:35.401910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.220 [2024-07-22 17:53:35.401918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.220 [2024-07-22 17:53:35.401924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.220 [2024-07-22 17:53:35.401936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:48128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.220 [2024-07-22 17:53:35.401942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.220 [2024-07-22 17:53:35.401951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.220 [2024-07-22 17:53:35.401958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.220 [2024-07-22 17:53:35.401966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.220 [2024-07-22 17:53:35.401973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.220 [2024-07-22 17:53:35.401981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:48256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.220 [2024-07-22 17:53:35.401988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.220 [2024-07-22 17:53:35.401996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:48512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.220 [2024-07-22 17:53:35.402003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.220 [2024-07-22 17:53:35.402012] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.220 [2024-07-22 17:53:35.402018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.220 [2024-07-22 17:53:35.402027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.220 [2024-07-22 17:53:35.402033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.220 [2024-07-22 17:53:35.402042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.220 [2024-07-22 17:53:35.402048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.220 [2024-07-22 17:53:35.402057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.220 [2024-07-22 17:53:35.402064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.220 [2024-07-22 17:53:35.402072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.220 [2024-07-22 17:53:35.402079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.220 [2024-07-22 17:53:35.402087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:48768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.220 [2024-07-22 17:53:35.402094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.220 [2024-07-22 17:53:35.402103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:48896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:31.220 [2024-07-22 17:53:35.402109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:31.220 [2024-07-22 17:53:35.402165] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x104dfe0 was disconnected and freed. reset controller. 
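The burst of ABORTED - SQ DELETION completions above is the expected effect of the host-management step: nvmf_subsystem_remove_host is issued while bdevperf still has I/O in flight, the target tears down the TCP queue pair, every outstanding command is completed with that status, and the initiator's bdev_nvme layer frees the qpair and starts a controller reset. The host is then re-added so the reset can reconnect, which is what the "Resetting controller successful" message below reports before the script kills bdevperf. A sketch of the RPC sequence driving this, with the NQNs from this run and an assumed rpc.py path (the test itself goes through its rpc_cmd wrapper):

rpc=./scripts/rpc.py   # path assumed for illustration
# Revoke the host's access: in-flight I/O completes as ABORTED - SQ DELETION.
$rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Re-admit the host so the automatic reset/reconnect can succeed.
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1                # the script waits briefly before tearing bdevperf down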
00:16:31.220 [2024-07-22 17:53:35.403264] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:31.220 17:53:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.220 task offset: 49152 on job bdev=Nvme0n1 fails 00:16:31.220 00:16:31.220 Latency(us) 00:16:31.220 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.220 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:31.220 Job: Nvme0n1 ended in about 0.63 seconds with error 00:16:31.220 Verification LBA range: start 0x0 length 0x400 00:16:31.220 Nvme0n1 : 0.63 2236.73 139.80 101.31 0.00 27058.45 1424.15 35490.26 00:16:31.220 =================================================================================================================== 00:16:31.220 Total : 2236.73 139.80 101.31 0.00 27058.45 1424.15 35490.26 00:16:31.220 [2024-07-22 17:53:35.405103] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:31.220 [2024-07-22 17:53:35.405125] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10503b0 (9): Bad file descriptor 00:16:31.220 17:53:35 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:31.220 17:53:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.220 17:53:35 -- common/autotest_common.sh@10 -- # set +x 00:16:31.220 17:53:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.220 17:53:35 -- target/host_management.sh@87 -- # sleep 1 00:16:31.220 [2024-07-22 17:53:35.419624] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:32.183 17:53:36 -- target/host_management.sh@91 -- # kill -9 1637625 00:16:32.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1637625) - No such process 00:16:32.183 17:53:36 -- target/host_management.sh@91 -- # true 00:16:32.183 17:53:36 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:32.183 17:53:36 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:32.183 17:53:36 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:32.183 17:53:36 -- nvmf/common.sh@520 -- # config=() 00:16:32.183 17:53:36 -- nvmf/common.sh@520 -- # local subsystem config 00:16:32.183 17:53:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:32.183 17:53:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:32.183 { 00:16:32.183 "params": { 00:16:32.183 "name": "Nvme$subsystem", 00:16:32.183 "trtype": "$TEST_TRANSPORT", 00:16:32.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.183 "adrfam": "ipv4", 00:16:32.183 "trsvcid": "$NVMF_PORT", 00:16:32.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.183 "hdgst": ${hdgst:-false}, 00:16:32.183 "ddgst": ${ddgst:-false} 00:16:32.183 }, 00:16:32.183 "method": "bdev_nvme_attach_controller" 00:16:32.183 } 00:16:32.183 EOF 00:16:32.183 )") 00:16:32.183 17:53:36 -- nvmf/common.sh@542 -- # cat 00:16:32.183 17:53:36 -- nvmf/common.sh@544 -- # jq . 
00:16:32.183 17:53:36 -- nvmf/common.sh@545 -- # IFS=, 00:16:32.183 17:53:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:32.183 "params": { 00:16:32.183 "name": "Nvme0", 00:16:32.183 "trtype": "tcp", 00:16:32.183 "traddr": "10.0.0.2", 00:16:32.184 "adrfam": "ipv4", 00:16:32.184 "trsvcid": "4420", 00:16:32.184 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:32.184 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:32.184 "hdgst": false, 00:16:32.184 "ddgst": false 00:16:32.184 }, 00:16:32.184 "method": "bdev_nvme_attach_controller" 00:16:32.184 }' 00:16:32.443 [2024-07-22 17:53:36.469733] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:32.443 [2024-07-22 17:53:36.469785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1637953 ] 00:16:32.443 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.443 [2024-07-22 17:53:36.549400] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.443 [2024-07-22 17:53:36.608241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.702 Running I/O for 1 seconds... 00:16:34.081 00:16:34.081 Latency(us) 00:16:34.081 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:34.081 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:34.081 Verification LBA range: start 0x0 length 0x400 00:16:34.081 Nvme0n1 : 1.07 2281.12 142.57 0.00 0.00 26505.16 7410.61 48799.11 00:16:34.081 =================================================================================================================== 00:16:34.081 Total : 2281.12 142.57 0.00 0.00 26505.16 7410.61 48799.11 00:16:34.081 17:53:38 -- target/host_management.sh@101 -- # stoptarget 00:16:34.081 17:53:38 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:34.081 17:53:38 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:34.081 17:53:38 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:34.081 17:53:38 -- target/host_management.sh@40 -- # nvmftestfini 00:16:34.081 17:53:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:34.081 17:53:38 -- nvmf/common.sh@116 -- # sync 00:16:34.081 17:53:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:34.081 17:53:38 -- nvmf/common.sh@119 -- # set +e 00:16:34.081 17:53:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:34.081 17:53:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:34.081 rmmod nvme_tcp 00:16:34.081 rmmod nvme_fabrics 00:16:34.081 rmmod nvme_keyring 00:16:34.081 17:53:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:34.081 17:53:38 -- nvmf/common.sh@123 -- # set -e 00:16:34.081 17:53:38 -- nvmf/common.sh@124 -- # return 0 00:16:34.081 17:53:38 -- nvmf/common.sh@477 -- # '[' -n 1637281 ']' 00:16:34.081 17:53:38 -- nvmf/common.sh@478 -- # killprocess 1637281 00:16:34.081 17:53:38 -- common/autotest_common.sh@926 -- # '[' -z 1637281 ']' 00:16:34.081 17:53:38 -- common/autotest_common.sh@930 -- # kill -0 1637281 00:16:34.081 17:53:38 -- common/autotest_common.sh@931 -- # uname 00:16:34.081 17:53:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:34.081 17:53:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1637281 00:16:34.081 17:53:38 
-- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:34.081 17:53:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:34.081 17:53:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1637281' 00:16:34.081 killing process with pid 1637281 00:16:34.081 17:53:38 -- common/autotest_common.sh@945 -- # kill 1637281 00:16:34.081 17:53:38 -- common/autotest_common.sh@950 -- # wait 1637281 00:16:34.081 [2024-07-22 17:53:38.338518] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:34.342 17:53:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:34.342 17:53:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:34.342 17:53:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:34.342 17:53:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:34.342 17:53:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:34.342 17:53:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.342 17:53:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:34.342 17:53:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.252 17:53:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:36.252 00:16:36.252 real 0m7.049s 00:16:36.252 user 0m21.819s 00:16:36.252 sys 0m1.098s 00:16:36.252 17:53:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:36.252 17:53:40 -- common/autotest_common.sh@10 -- # set +x 00:16:36.252 ************************************ 00:16:36.252 END TEST nvmf_host_management 00:16:36.252 ************************************ 00:16:36.252 17:53:40 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:16:36.252 00:16:36.252 real 0m14.779s 00:16:36.252 user 0m23.775s 00:16:36.252 sys 0m6.770s 00:16:36.252 17:53:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:36.252 17:53:40 -- common/autotest_common.sh@10 -- # set +x 00:16:36.252 ************************************ 00:16:36.252 END TEST nvmf_host_management 00:16:36.252 ************************************ 00:16:36.252 17:53:40 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:36.252 17:53:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:36.252 17:53:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:36.252 17:53:40 -- common/autotest_common.sh@10 -- # set +x 00:16:36.513 ************************************ 00:16:36.513 START TEST nvmf_lvol 00:16:36.513 ************************************ 00:16:36.513 17:53:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:36.513 * Looking for test storage... 
00:16:36.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:36.513 17:53:40 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:36.513 17:53:40 -- nvmf/common.sh@7 -- # uname -s 00:16:36.513 17:53:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:36.513 17:53:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:36.513 17:53:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:36.513 17:53:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:36.513 17:53:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:36.513 17:53:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:36.513 17:53:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:36.513 17:53:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:36.513 17:53:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:36.513 17:53:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:36.513 17:53:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:16:36.513 17:53:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:16:36.513 17:53:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:36.513 17:53:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:36.513 17:53:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:36.513 17:53:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:36.513 17:53:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:36.513 17:53:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:36.513 17:53:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:36.513 17:53:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.513 17:53:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.513 17:53:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.513 17:53:40 -- paths/export.sh@5 -- # export PATH 00:16:36.513 17:53:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.513 17:53:40 -- nvmf/common.sh@46 -- # : 0 00:16:36.513 17:53:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:36.513 17:53:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:36.513 17:53:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:36.513 17:53:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:36.513 17:53:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:36.513 17:53:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:36.513 17:53:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:36.513 17:53:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:36.513 17:53:40 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:36.513 17:53:40 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:36.513 17:53:40 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:36.513 17:53:40 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:36.513 17:53:40 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:36.513 17:53:40 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:36.513 17:53:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:36.513 17:53:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:36.513 17:53:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:36.513 17:53:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:36.513 17:53:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:36.513 17:53:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.513 17:53:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:36.513 17:53:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.513 17:53:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:36.513 17:53:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:36.513 17:53:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:36.513 17:53:40 -- common/autotest_common.sh@10 -- # set +x 00:16:44.646 17:53:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:44.646 17:53:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:44.646 17:53:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:44.646 17:53:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:44.646 17:53:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:44.646 17:53:48 
-- nvmf/common.sh@292 -- # pci_drivers=() 00:16:44.646 17:53:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:44.646 17:53:48 -- nvmf/common.sh@294 -- # net_devs=() 00:16:44.646 17:53:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:44.646 17:53:48 -- nvmf/common.sh@295 -- # e810=() 00:16:44.646 17:53:48 -- nvmf/common.sh@295 -- # local -ga e810 00:16:44.646 17:53:48 -- nvmf/common.sh@296 -- # x722=() 00:16:44.646 17:53:48 -- nvmf/common.sh@296 -- # local -ga x722 00:16:44.646 17:53:48 -- nvmf/common.sh@297 -- # mlx=() 00:16:44.646 17:53:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:44.646 17:53:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:44.646 17:53:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:44.646 17:53:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:44.646 17:53:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:44.646 17:53:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:44.646 17:53:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:44.646 17:53:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:44.646 17:53:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:44.646 17:53:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:44.646 17:53:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:44.646 17:53:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:44.646 17:53:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:44.646 17:53:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:44.646 17:53:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:44.646 17:53:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:44.646 17:53:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:44.646 17:53:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:44.646 17:53:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:44.646 17:53:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:44.646 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:44.646 17:53:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:44.646 17:53:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:44.646 17:53:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:44.646 17:53:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:44.646 17:53:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:44.646 17:53:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:44.646 17:53:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:44.646 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:44.646 17:53:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:44.646 17:53:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:44.646 17:53:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:44.646 17:53:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:44.646 17:53:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:44.646 17:53:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:44.646 17:53:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:44.646 17:53:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:44.646 17:53:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:44.646 17:53:48 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:44.646 17:53:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:44.646 17:53:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.646 17:53:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:44.646 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:44.646 17:53:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:44.646 17:53:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:44.646 17:53:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:44.646 17:53:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:44.646 17:53:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.646 17:53:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:44.646 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:44.646 17:53:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:44.646 17:53:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:44.646 17:53:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:44.646 17:53:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:44.646 17:53:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:44.646 17:53:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:44.646 17:53:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:44.646 17:53:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:44.646 17:53:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:44.646 17:53:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:44.646 17:53:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:44.646 17:53:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:44.646 17:53:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:44.646 17:53:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:44.646 17:53:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:44.646 17:53:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:44.646 17:53:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:44.646 17:53:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:44.646 17:53:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:44.646 17:53:48 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:44.646 17:53:48 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:44.646 17:53:48 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:44.646 17:53:48 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:44.646 17:53:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:44.646 17:53:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:44.646 17:53:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:44.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:44.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:16:44.646 00:16:44.646 --- 10.0.0.2 ping statistics --- 00:16:44.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.646 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:16:44.646 17:53:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:44.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:44.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:16:44.646 00:16:44.646 --- 10.0.0.1 ping statistics --- 00:16:44.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.646 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:16:44.646 17:53:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:44.646 17:53:48 -- nvmf/common.sh@410 -- # return 0 00:16:44.646 17:53:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:44.646 17:53:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:44.646 17:53:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:44.646 17:53:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:44.646 17:53:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:44.646 17:53:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:44.646 17:53:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:44.646 17:53:48 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:44.646 17:53:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:44.646 17:53:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:44.646 17:53:48 -- common/autotest_common.sh@10 -- # set +x 00:16:44.646 17:53:48 -- nvmf/common.sh@469 -- # nvmfpid=1642621 00:16:44.646 17:53:48 -- nvmf/common.sh@470 -- # waitforlisten 1642621 00:16:44.646 17:53:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:44.646 17:53:48 -- common/autotest_common.sh@819 -- # '[' -z 1642621 ']' 00:16:44.646 17:53:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.646 17:53:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:44.646 17:53:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.646 17:53:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:44.646 17:53:48 -- common/autotest_common.sh@10 -- # set +x 00:16:44.906 [2024-07-22 17:53:48.941044] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:44.906 [2024-07-22 17:53:48.941108] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.906 EAL: No free 2048 kB hugepages reported on node 1 00:16:44.906 [2024-07-22 17:53:49.034930] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:44.906 [2024-07-22 17:53:49.124270] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:44.907 [2024-07-22 17:53:49.124461] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:44.907 [2024-07-22 17:53:49.124472] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:44.907 [2024-07-22 17:53:49.124480] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:44.907 [2024-07-22 17:53:49.124556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.907 [2024-07-22 17:53:49.124680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:44.907 [2024-07-22 17:53:49.124683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.845 17:53:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:45.845 17:53:49 -- common/autotest_common.sh@852 -- # return 0 00:16:45.845 17:53:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:45.845 17:53:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:45.845 17:53:49 -- common/autotest_common.sh@10 -- # set +x 00:16:45.845 17:53:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.845 17:53:49 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:45.845 [2024-07-22 17:53:49.986078] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:45.845 17:53:50 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:46.105 17:53:50 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:46.105 17:53:50 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:46.105 17:53:50 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:46.105 17:53:50 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:46.364 17:53:50 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:46.624 17:53:50 -- target/nvmf_lvol.sh@29 -- # lvs=d77b5014-fe49-4c29-9bfd-fea5523efa67 00:16:46.624 17:53:50 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d77b5014-fe49-4c29-9bfd-fea5523efa67 lvol 20 00:16:46.884 17:53:50 -- target/nvmf_lvol.sh@32 -- # lvol=2d3f2eb5-fe99-4d0a-aa2f-462d4a0b658d 00:16:46.884 17:53:50 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:46.884 17:53:51 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2d3f2eb5-fe99-4d0a-aa2f-462d4a0b658d 00:16:47.142 17:53:51 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:47.401 [2024-07-22 17:53:51.532417] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:47.401 17:53:51 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:47.661 17:53:51 -- target/nvmf_lvol.sh@42 -- # perf_pid=1643148 00:16:47.661 17:53:51 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:47.661 17:53:51 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:47.661 EAL: No free 2048 kB hugepages reported on node 1 00:16:48.600 
17:53:52 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 2d3f2eb5-fe99-4d0a-aa2f-462d4a0b658d MY_SNAPSHOT 00:16:48.863 17:53:52 -- target/nvmf_lvol.sh@47 -- # snapshot=0d8c2bde-a9f9-4da2-bed3-6168d8e1d0d6 00:16:48.863 17:53:52 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 2d3f2eb5-fe99-4d0a-aa2f-462d4a0b658d 30 00:16:49.123 17:53:53 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 0d8c2bde-a9f9-4da2-bed3-6168d8e1d0d6 MY_CLONE 00:16:49.382 17:53:53 -- target/nvmf_lvol.sh@49 -- # clone=f57d11a5-3d83-4cc3-842e-9db68682aa58 00:16:49.382 17:53:53 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f57d11a5-3d83-4cc3-842e-9db68682aa58 00:16:49.642 17:53:53 -- target/nvmf_lvol.sh@53 -- # wait 1643148 00:16:59.627 Initializing NVMe Controllers 00:16:59.627 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:59.627 Controller IO queue size 128, less than required. 00:16:59.627 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:59.627 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:59.627 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:59.627 Initialization complete. Launching workers. 00:16:59.628 ======================================================== 00:16:59.628 Latency(us) 00:16:59.628 Device Information : IOPS MiB/s Average min max 00:16:59.628 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 13296.30 51.94 9632.08 1428.15 56510.32 00:16:59.628 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 13334.30 52.09 9601.27 860.36 50329.78 00:16:59.628 ======================================================== 00:16:59.628 Total : 26630.60 104.03 9616.65 860.36 56510.32 00:16:59.628 00:16:59.628 17:54:02 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:59.628 17:54:02 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2d3f2eb5-fe99-4d0a-aa2f-462d4a0b658d 00:16:59.628 17:54:02 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d77b5014-fe49-4c29-9bfd-fea5523efa67 00:16:59.628 17:54:02 -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:59.628 17:54:02 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:59.628 17:54:02 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:59.628 17:54:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:59.628 17:54:02 -- nvmf/common.sh@116 -- # sync 00:16:59.628 17:54:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:59.628 17:54:02 -- nvmf/common.sh@119 -- # set +e 00:16:59.628 17:54:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:59.628 17:54:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:59.628 rmmod nvme_tcp 00:16:59.628 rmmod nvme_fabrics 00:16:59.628 rmmod nvme_keyring 00:16:59.628 17:54:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:59.628 17:54:02 -- nvmf/common.sh@123 -- # set -e 00:16:59.628 17:54:02 -- nvmf/common.sh@124 -- # return 0 00:16:59.628 17:54:02 -- nvmf/common.sh@477 -- # '[' -n 1642621 ']' 
00:16:59.628 17:54:02 -- nvmf/common.sh@478 -- # killprocess 1642621 00:16:59.628 17:54:02 -- common/autotest_common.sh@926 -- # '[' -z 1642621 ']' 00:16:59.628 17:54:02 -- common/autotest_common.sh@930 -- # kill -0 1642621 00:16:59.628 17:54:02 -- common/autotest_common.sh@931 -- # uname 00:16:59.628 17:54:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:59.628 17:54:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1642621 00:16:59.628 17:54:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:59.628 17:54:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:59.628 17:54:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1642621' 00:16:59.628 killing process with pid 1642621 00:16:59.628 17:54:02 -- common/autotest_common.sh@945 -- # kill 1642621 00:16:59.628 17:54:02 -- common/autotest_common.sh@950 -- # wait 1642621 00:16:59.628 17:54:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:59.628 17:54:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:59.628 17:54:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:59.628 17:54:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:59.628 17:54:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:59.628 17:54:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.628 17:54:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:59.628 17:54:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.008 17:54:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:01.008 00:17:01.008 real 0m24.524s 00:17:01.008 user 1m5.444s 00:17:01.008 sys 0m8.542s 00:17:01.008 17:54:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:01.008 17:54:05 -- common/autotest_common.sh@10 -- # set +x 00:17:01.008 ************************************ 00:17:01.008 END TEST nvmf_lvol 00:17:01.008 ************************************ 00:17:01.008 17:54:05 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:01.008 17:54:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:01.008 17:54:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:01.008 17:54:05 -- common/autotest_common.sh@10 -- # set +x 00:17:01.008 ************************************ 00:17:01.008 START TEST nvmf_lvs_grow 00:17:01.008 ************************************ 00:17:01.008 17:54:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:01.008 * Looking for test storage... 
00:17:01.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:01.008 17:54:05 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:01.008 17:54:05 -- nvmf/common.sh@7 -- # uname -s 00:17:01.008 17:54:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.008 17:54:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.008 17:54:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.008 17:54:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.008 17:54:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.008 17:54:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.008 17:54:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.008 17:54:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.008 17:54:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.008 17:54:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.009 17:54:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:01.009 17:54:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:01.009 17:54:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.009 17:54:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.009 17:54:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:01.009 17:54:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:01.009 17:54:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.009 17:54:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.009 17:54:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.009 17:54:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.009 17:54:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.009 17:54:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.009 17:54:05 -- paths/export.sh@5 -- # export PATH 00:17:01.009 17:54:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.009 17:54:05 -- nvmf/common.sh@46 -- # : 0 00:17:01.009 17:54:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:01.009 17:54:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:01.009 17:54:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:01.009 17:54:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.009 17:54:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.009 17:54:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:01.009 17:54:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:01.009 17:54:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:01.009 17:54:05 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:01.009 17:54:05 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:01.009 17:54:05 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:17:01.009 17:54:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:01.009 17:54:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.009 17:54:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:01.009 17:54:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:01.009 17:54:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:01.009 17:54:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.009 17:54:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:01.009 17:54:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.009 17:54:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:01.009 17:54:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:01.009 17:54:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:01.009 17:54:05 -- common/autotest_common.sh@10 -- # set +x 00:17:09.140 17:54:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:09.140 17:54:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:09.140 17:54:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:09.140 17:54:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:09.140 17:54:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:09.140 17:54:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:09.140 17:54:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:09.140 17:54:13 -- nvmf/common.sh@294 -- # net_devs=() 00:17:09.141 17:54:13 
-- nvmf/common.sh@294 -- # local -ga net_devs 00:17:09.141 17:54:13 -- nvmf/common.sh@295 -- # e810=() 00:17:09.141 17:54:13 -- nvmf/common.sh@295 -- # local -ga e810 00:17:09.141 17:54:13 -- nvmf/common.sh@296 -- # x722=() 00:17:09.141 17:54:13 -- nvmf/common.sh@296 -- # local -ga x722 00:17:09.141 17:54:13 -- nvmf/common.sh@297 -- # mlx=() 00:17:09.141 17:54:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:09.141 17:54:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:09.141 17:54:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:09.141 17:54:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:09.141 17:54:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:09.141 17:54:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:09.141 17:54:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:09.141 17:54:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:09.141 17:54:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:09.141 17:54:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:09.141 17:54:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:09.141 17:54:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:09.141 17:54:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:09.141 17:54:13 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:09.141 17:54:13 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:09.141 17:54:13 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:09.141 17:54:13 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:09.141 17:54:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:09.141 17:54:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:09.141 17:54:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:09.141 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:09.141 17:54:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:09.141 17:54:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:09.141 17:54:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.141 17:54:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.141 17:54:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:09.141 17:54:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:09.141 17:54:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:09.141 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:09.141 17:54:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:09.141 17:54:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:09.141 17:54:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.141 17:54:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.141 17:54:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:09.141 17:54:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:09.141 17:54:13 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:09.141 17:54:13 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:09.141 17:54:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:09.141 17:54:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.141 17:54:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:09.141 17:54:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.141 17:54:13 -- 
nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:09.141 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:09.141 17:54:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.141 17:54:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:09.141 17:54:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.141 17:54:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:09.141 17:54:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.141 17:54:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:09.141 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:09.141 17:54:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.141 17:54:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:09.141 17:54:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:09.141 17:54:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:09.141 17:54:13 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:09.141 17:54:13 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:09.141 17:54:13 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:09.141 17:54:13 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:09.141 17:54:13 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:09.141 17:54:13 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:09.141 17:54:13 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:09.141 17:54:13 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:09.141 17:54:13 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:09.141 17:54:13 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:09.141 17:54:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:09.141 17:54:13 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:09.141 17:54:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:09.141 17:54:13 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:09.141 17:54:13 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:09.141 17:54:13 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:09.141 17:54:13 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:09.141 17:54:13 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:09.141 17:54:13 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:09.402 17:54:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:09.402 17:54:13 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:09.402 17:54:13 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:09.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:09.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:17:09.402 00:17:09.402 --- 10.0.0.2 ping statistics --- 00:17:09.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.402 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:17:09.402 17:54:13 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:09.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:09.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:17:09.402 00:17:09.402 --- 10.0.0.1 ping statistics --- 00:17:09.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.402 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:17:09.402 17:54:13 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:09.402 17:54:13 -- nvmf/common.sh@410 -- # return 0 00:17:09.402 17:54:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:09.402 17:54:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:09.402 17:54:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:09.402 17:54:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:09.402 17:54:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:09.402 17:54:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:09.402 17:54:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:09.402 17:54:13 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:17:09.402 17:54:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:09.402 17:54:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:09.402 17:54:13 -- common/autotest_common.sh@10 -- # set +x 00:17:09.402 17:54:13 -- nvmf/common.sh@469 -- # nvmfpid=1649958 00:17:09.402 17:54:13 -- nvmf/common.sh@470 -- # waitforlisten 1649958 00:17:09.402 17:54:13 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:09.402 17:54:13 -- common/autotest_common.sh@819 -- # '[' -z 1649958 ']' 00:17:09.402 17:54:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.402 17:54:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:09.402 17:54:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.402 17:54:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:09.402 17:54:13 -- common/autotest_common.sh@10 -- # set +x 00:17:09.402 [2024-07-22 17:54:13.628236] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:09.402 [2024-07-22 17:54:13.628289] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.402 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.662 [2024-07-22 17:54:13.718128] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.662 [2024-07-22 17:54:13.808301] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:09.662 [2024-07-22 17:54:13.808476] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.662 [2024-07-22 17:54:13.808486] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.662 [2024-07-22 17:54:13.808494] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:09.662 [2024-07-22 17:54:13.808532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.233 17:54:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:10.233 17:54:14 -- common/autotest_common.sh@852 -- # return 0 00:17:10.233 17:54:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:10.233 17:54:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:10.233 17:54:14 -- common/autotest_common.sh@10 -- # set +x 00:17:10.493 17:54:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.493 17:54:14 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:10.493 [2024-07-22 17:54:14.703014] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.493 17:54:14 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:17:10.493 17:54:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:10.493 17:54:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:10.493 17:54:14 -- common/autotest_common.sh@10 -- # set +x 00:17:10.493 ************************************ 00:17:10.493 START TEST lvs_grow_clean 00:17:10.493 ************************************ 00:17:10.493 17:54:14 -- common/autotest_common.sh@1104 -- # lvs_grow 00:17:10.493 17:54:14 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:10.493 17:54:14 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:10.493 17:54:14 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:10.493 17:54:14 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:10.493 17:54:14 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:10.493 17:54:14 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:10.493 17:54:14 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:10.493 17:54:14 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:10.493 17:54:14 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:10.753 17:54:14 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:10.753 17:54:14 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:11.325 17:54:15 -- target/nvmf_lvs_grow.sh@28 -- # lvs=618aa5c2-9b15-4369-9b17-2e99fc01cf7a 00:17:11.325 17:54:15 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 618aa5c2-9b15-4369-9b17-2e99fc01cf7a 00:17:11.325 17:54:15 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:11.922 17:54:16 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:11.922 17:54:16 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:11.922 17:54:16 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 618aa5c2-9b15-4369-9b17-2e99fc01cf7a lvol 150 00:17:12.188 17:54:16 -- target/nvmf_lvs_grow.sh@33 -- # lvol=838204d3-8245-4a9e-94de-4496c06df8c3 00:17:12.188 17:54:16 -- 
target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:12.188 17:54:16 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:12.188 [2024-07-22 17:54:16.397705] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:12.188 [2024-07-22 17:54:16.397775] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:12.188 true 00:17:12.188 17:54:16 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 618aa5c2-9b15-4369-9b17-2e99fc01cf7a 00:17:12.188 17:54:16 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:12.758 17:54:16 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:12.758 17:54:16 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:13.018 17:54:17 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 838204d3-8245-4a9e-94de-4496c06df8c3 00:17:13.588 17:54:17 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:13.588 [2024-07-22 17:54:17.837923] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.588 17:54:17 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:13.848 17:54:18 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1650748 00:17:13.848 17:54:18 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:13.848 17:54:18 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1650748 /var/tmp/bdevperf.sock 00:17:13.848 17:54:18 -- common/autotest_common.sh@819 -- # '[' -z 1650748 ']' 00:17:13.848 17:54:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:13.848 17:54:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:13.848 17:54:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:13.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:13.848 17:54:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:13.848 17:54:18 -- common/autotest_common.sh@10 -- # set +x 00:17:13.848 17:54:18 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:13.848 [2024-07-22 17:54:18.103493] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:17:13.848 [2024-07-22 17:54:18.103619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1650748 ] 00:17:14.108 EAL: No free 2048 kB hugepages reported on node 1 00:17:14.108 [2024-07-22 17:54:18.217216] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.108 [2024-07-22 17:54:18.277312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.047 17:54:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:15.047 17:54:19 -- common/autotest_common.sh@852 -- # return 0 00:17:15.047 17:54:19 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:15.617 Nvme0n1 00:17:15.617 17:54:19 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:15.617 [ 00:17:15.617 { 00:17:15.617 "name": "Nvme0n1", 00:17:15.617 "aliases": [ 00:17:15.617 "838204d3-8245-4a9e-94de-4496c06df8c3" 00:17:15.617 ], 00:17:15.617 "product_name": "NVMe disk", 00:17:15.617 "block_size": 4096, 00:17:15.617 "num_blocks": 38912, 00:17:15.617 "uuid": "838204d3-8245-4a9e-94de-4496c06df8c3", 00:17:15.617 "assigned_rate_limits": { 00:17:15.617 "rw_ios_per_sec": 0, 00:17:15.617 "rw_mbytes_per_sec": 0, 00:17:15.617 "r_mbytes_per_sec": 0, 00:17:15.617 "w_mbytes_per_sec": 0 00:17:15.617 }, 00:17:15.617 "claimed": false, 00:17:15.617 "zoned": false, 00:17:15.617 "supported_io_types": { 00:17:15.617 "read": true, 00:17:15.617 "write": true, 00:17:15.617 "unmap": true, 00:17:15.617 "write_zeroes": true, 00:17:15.617 "flush": true, 00:17:15.617 "reset": true, 00:17:15.617 "compare": true, 00:17:15.617 "compare_and_write": true, 00:17:15.617 "abort": true, 00:17:15.617 "nvme_admin": true, 00:17:15.617 "nvme_io": true 00:17:15.617 }, 00:17:15.617 "driver_specific": { 00:17:15.617 "nvme": [ 00:17:15.617 { 00:17:15.617 "trid": { 00:17:15.617 "trtype": "TCP", 00:17:15.617 "adrfam": "IPv4", 00:17:15.617 "traddr": "10.0.0.2", 00:17:15.617 "trsvcid": "4420", 00:17:15.617 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:15.617 }, 00:17:15.617 "ctrlr_data": { 00:17:15.617 "cntlid": 1, 00:17:15.617 "vendor_id": "0x8086", 00:17:15.617 "model_number": "SPDK bdev Controller", 00:17:15.617 "serial_number": "SPDK0", 00:17:15.617 "firmware_revision": "24.01.1", 00:17:15.617 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:15.617 "oacs": { 00:17:15.617 "security": 0, 00:17:15.617 "format": 0, 00:17:15.617 "firmware": 0, 00:17:15.617 "ns_manage": 0 00:17:15.617 }, 00:17:15.617 "multi_ctrlr": true, 00:17:15.617 "ana_reporting": false 00:17:15.617 }, 00:17:15.617 "vs": { 00:17:15.617 "nvme_version": "1.3" 00:17:15.617 }, 00:17:15.617 "ns_data": { 00:17:15.617 "id": 1, 00:17:15.617 "can_share": true 00:17:15.617 } 00:17:15.617 } 00:17:15.617 ], 00:17:15.617 "mp_policy": "active_passive" 00:17:15.617 } 00:17:15.617 } 00:17:15.617 ] 00:17:15.617 17:54:19 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1651058 00:17:15.617 17:54:19 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:15.617 17:54:19 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:15.617 Running I/O 
for 10 seconds... 00:17:16.999 Latency(us) 00:17:16.999 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.999 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:16.999 Nvme0n1 : 1.00 20106.00 78.54 0.00 0.00 0.00 0.00 0.00 00:17:16.999 =================================================================================================================== 00:17:16.999 Total : 20106.00 78.54 0.00 0.00 0.00 0.00 0.00 00:17:16.999 00:17:17.568 17:54:21 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 618aa5c2-9b15-4369-9b17-2e99fc01cf7a 00:17:17.827 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:17.827 Nvme0n1 : 2.00 20229.00 79.02 0.00 0.00 0.00 0.00 0.00 00:17:17.827 =================================================================================================================== 00:17:17.827 Total : 20229.00 79.02 0.00 0.00 0.00 0.00 0.00 00:17:17.827 00:17:17.827 true 00:17:17.827 17:54:21 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 618aa5c2-9b15-4369-9b17-2e99fc01cf7a 00:17:17.827 17:54:21 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:18.086 17:54:22 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:18.087 17:54:22 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:18.087 17:54:22 -- target/nvmf_lvs_grow.sh@65 -- # wait 1651058 00:17:18.655 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:18.655 Nvme0n1 : 3.00 20267.33 79.17 0.00 0.00 0.00 0.00 0.00 00:17:18.655 =================================================================================================================== 00:17:18.655 Total : 20267.33 79.17 0.00 0.00 0.00 0.00 0.00 00:17:18.655 00:17:20.031 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:20.031 Nvme0n1 : 4.00 20304.50 79.31 0.00 0.00 0.00 0.00 0.00 00:17:20.031 =================================================================================================================== 00:17:20.031 Total : 20304.50 79.31 0.00 0.00 0.00 0.00 0.00 00:17:20.031 00:17:20.967 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:20.967 Nvme0n1 : 5.00 20326.60 79.40 0.00 0.00 0.00 0.00 0.00 00:17:20.967 =================================================================================================================== 00:17:20.967 Total : 20326.60 79.40 0.00 0.00 0.00 0.00 0.00 00:17:20.967 00:17:21.902 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:21.902 Nvme0n1 : 6.00 20351.83 79.50 0.00 0.00 0.00 0.00 0.00 00:17:21.902 =================================================================================================================== 00:17:21.902 Total : 20351.83 79.50 0.00 0.00 0.00 0.00 0.00 00:17:21.902 00:17:22.837 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:22.837 Nvme0n1 : 7.00 20370.14 79.57 0.00 0.00 0.00 0.00 0.00 00:17:22.837 =================================================================================================================== 00:17:22.837 Total : 20370.14 79.57 0.00 0.00 0.00 0.00 0.00 00:17:22.837 00:17:23.816 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:23.816 Nvme0n1 : 8.00 20383.88 79.62 0.00 0.00 0.00 0.00 0.00 00:17:23.816 
=================================================================================================================== 00:17:23.816 Total : 20383.88 79.62 0.00 0.00 0.00 0.00 0.00 00:17:23.816 00:17:24.752 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:24.752 Nvme0n1 : 9.00 20394.44 79.67 0.00 0.00 0.00 0.00 0.00 00:17:24.752 =================================================================================================================== 00:17:24.752 Total : 20394.44 79.67 0.00 0.00 0.00 0.00 0.00 00:17:24.752 00:17:25.689 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:25.689 Nvme0n1 : 10.00 20409.30 79.72 0.00 0.00 0.00 0.00 0.00 00:17:25.689 =================================================================================================================== 00:17:25.689 Total : 20409.30 79.72 0.00 0.00 0.00 0.00 0.00 00:17:25.689 00:17:25.689 00:17:25.689 Latency(us) 00:17:25.689 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.689 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:25.689 Nvme0n1 : 10.01 20409.19 79.72 0.00 0.00 6267.23 3856.54 11090.71 00:17:25.689 =================================================================================================================== 00:17:25.689 Total : 20409.19 79.72 0.00 0.00 6267.23 3856.54 11090.71 00:17:25.689 0 00:17:25.689 17:54:29 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1650748 00:17:25.689 17:54:29 -- common/autotest_common.sh@926 -- # '[' -z 1650748 ']' 00:17:25.689 17:54:29 -- common/autotest_common.sh@930 -- # kill -0 1650748 00:17:25.689 17:54:29 -- common/autotest_common.sh@931 -- # uname 00:17:25.689 17:54:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:25.689 17:54:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1650748 00:17:25.948 17:54:29 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:25.948 17:54:29 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:25.948 17:54:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1650748' 00:17:25.948 killing process with pid 1650748 00:17:25.948 17:54:29 -- common/autotest_common.sh@945 -- # kill 1650748 00:17:25.948 Received shutdown signal, test time was about 10.000000 seconds 00:17:25.948 00:17:25.948 Latency(us) 00:17:25.948 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.948 =================================================================================================================== 00:17:25.948 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:25.948 17:54:29 -- common/autotest_common.sh@950 -- # wait 1650748 00:17:25.948 17:54:30 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:26.208 17:54:30 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 618aa5c2-9b15-4369-9b17-2e99fc01cf7a 00:17:26.208 17:54:30 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:26.774 17:54:30 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:26.774 17:54:30 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:17:26.774 17:54:30 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:26.774 [2024-07-22 17:54:31.001928] vbdev_lvol.c: 
150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:26.774 17:54:31 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 618aa5c2-9b15-4369-9b17-2e99fc01cf7a 00:17:26.774 17:54:31 -- common/autotest_common.sh@640 -- # local es=0 00:17:26.774 17:54:31 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 618aa5c2-9b15-4369-9b17-2e99fc01cf7a 00:17:26.775 17:54:31 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:26.775 17:54:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:26.775 17:54:31 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:26.775 17:54:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:26.775 17:54:31 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:26.775 17:54:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:26.775 17:54:31 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:26.775 17:54:31 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:26.775 17:54:31 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 618aa5c2-9b15-4369-9b17-2e99fc01cf7a 00:17:27.033 request: 00:17:27.034 { 00:17:27.034 "uuid": "618aa5c2-9b15-4369-9b17-2e99fc01cf7a", 00:17:27.034 "method": "bdev_lvol_get_lvstores", 00:17:27.034 "req_id": 1 00:17:27.034 } 00:17:27.034 Got JSON-RPC error response 00:17:27.034 response: 00:17:27.034 { 00:17:27.034 "code": -19, 00:17:27.034 "message": "No such device" 00:17:27.034 } 00:17:27.034 17:54:31 -- common/autotest_common.sh@643 -- # es=1 00:17:27.034 17:54:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:27.034 17:54:31 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:27.034 17:54:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:27.034 17:54:31 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:27.293 aio_bdev 00:17:27.293 17:54:31 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 838204d3-8245-4a9e-94de-4496c06df8c3 00:17:27.293 17:54:31 -- common/autotest_common.sh@887 -- # local bdev_name=838204d3-8245-4a9e-94de-4496c06df8c3 00:17:27.293 17:54:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:27.293 17:54:31 -- common/autotest_common.sh@889 -- # local i 00:17:27.293 17:54:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:27.293 17:54:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:27.293 17:54:31 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:27.552 17:54:31 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 838204d3-8245-4a9e-94de-4496c06df8c3 -t 2000 00:17:27.552 [ 00:17:27.552 { 00:17:27.552 "name": "838204d3-8245-4a9e-94de-4496c06df8c3", 00:17:27.552 "aliases": [ 00:17:27.552 "lvs/lvol" 
00:17:27.552 ], 00:17:27.552 "product_name": "Logical Volume", 00:17:27.552 "block_size": 4096, 00:17:27.552 "num_blocks": 38912, 00:17:27.552 "uuid": "838204d3-8245-4a9e-94de-4496c06df8c3", 00:17:27.552 "assigned_rate_limits": { 00:17:27.552 "rw_ios_per_sec": 0, 00:17:27.552 "rw_mbytes_per_sec": 0, 00:17:27.552 "r_mbytes_per_sec": 0, 00:17:27.552 "w_mbytes_per_sec": 0 00:17:27.552 }, 00:17:27.552 "claimed": false, 00:17:27.552 "zoned": false, 00:17:27.552 "supported_io_types": { 00:17:27.552 "read": true, 00:17:27.552 "write": true, 00:17:27.552 "unmap": true, 00:17:27.552 "write_zeroes": true, 00:17:27.552 "flush": false, 00:17:27.552 "reset": true, 00:17:27.552 "compare": false, 00:17:27.552 "compare_and_write": false, 00:17:27.552 "abort": false, 00:17:27.552 "nvme_admin": false, 00:17:27.552 "nvme_io": false 00:17:27.552 }, 00:17:27.552 "driver_specific": { 00:17:27.552 "lvol": { 00:17:27.552 "lvol_store_uuid": "618aa5c2-9b15-4369-9b17-2e99fc01cf7a", 00:17:27.552 "base_bdev": "aio_bdev", 00:17:27.552 "thin_provision": false, 00:17:27.552 "snapshot": false, 00:17:27.552 "clone": false, 00:17:27.552 "esnap_clone": false 00:17:27.552 } 00:17:27.552 } 00:17:27.552 } 00:17:27.552 ] 00:17:27.552 17:54:31 -- common/autotest_common.sh@895 -- # return 0 00:17:27.552 17:54:31 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 618aa5c2-9b15-4369-9b17-2e99fc01cf7a 00:17:27.552 17:54:31 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:27.812 17:54:31 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:27.812 17:54:31 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 618aa5c2-9b15-4369-9b17-2e99fc01cf7a 00:17:27.812 17:54:31 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:28.071 17:54:32 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:28.071 17:54:32 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 838204d3-8245-4a9e-94de-4496c06df8c3 00:17:28.639 17:54:32 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 618aa5c2-9b15-4369-9b17-2e99fc01cf7a 00:17:28.639 17:54:32 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:28.899 17:54:33 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:28.899 00:17:28.899 real 0m18.345s 00:17:28.899 user 0m18.289s 00:17:28.899 sys 0m1.550s 00:17:28.899 17:54:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:28.899 17:54:33 -- common/autotest_common.sh@10 -- # set +x 00:17:28.899 ************************************ 00:17:28.899 END TEST lvs_grow_clean 00:17:28.899 ************************************ 00:17:28.899 17:54:33 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:28.899 17:54:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:28.899 17:54:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:28.899 17:54:33 -- common/autotest_common.sh@10 -- # set +x 00:17:28.899 ************************************ 00:17:28.900 START TEST lvs_grow_dirty 00:17:28.900 ************************************ 00:17:28.900 17:54:33 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:17:28.900 
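For reference, the lvs_grow_dirty trace that follows boils down to the rpc.py sequence sketched here. This is only a condensed reading of the calls visible in the log (binary and rpc.py paths are shortened, and the UUIDs and target PID are left as placeholders); the authoritative steps are in test/nvmf/target/nvmf_lvs_grow.sh itself:

  truncate -s 200M spdk/test/nvmf/target/aio_bdev
  rpc.py bdev_aio_create spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150
  truncate -s 400M spdk/test/nvmf/target/aio_bdev
  rpc.py bdev_aio_rescan aio_bdev                               # backing file grows from 51200 to 102400 blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z    # attaches Nvme0 over TCP
  rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>                   # issued while the randwrite job is still running
  kill -9 <nvmf_tgt-pid>                                        # hard kill leaves the lvstore marked dirty
  # restart nvmf_tgt, then:
  rpc.py bdev_aio_create spdk/test/nvmf/target/aio_bdev aio_bdev 4096   # reload triggers blobstore recovery

The cluster-count checks in the trace (total_data_clusters growing from 49 to 99, free_clusters settling at 61 before and after the recovery) are the assertions the script makes along the way.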
17:54:33 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:28.900 17:54:33 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:28.900 17:54:33 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:28.900 17:54:33 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:28.900 17:54:33 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:28.900 17:54:33 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:28.900 17:54:33 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:28.900 17:54:33 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:28.900 17:54:33 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:29.159 17:54:33 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:29.159 17:54:33 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:29.419 17:54:33 -- target/nvmf_lvs_grow.sh@28 -- # lvs=5357fb5d-39b9-46e4-84e7-479f75c966c3 00:17:29.419 17:54:33 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5357fb5d-39b9-46e4-84e7-479f75c966c3 00:17:29.419 17:54:33 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:29.677 17:54:33 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:29.677 17:54:33 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:29.678 17:54:33 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5357fb5d-39b9-46e4-84e7-479f75c966c3 lvol 150 00:17:29.678 17:54:33 -- target/nvmf_lvs_grow.sh@33 -- # lvol=c20790cb-901b-4de5-9946-4377fc47f1c1 00:17:29.678 17:54:33 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:29.678 17:54:33 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:29.936 [2024-07-22 17:54:34.076763] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:29.936 [2024-07-22 17:54:34.076815] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:29.936 true 00:17:29.936 17:54:34 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5357fb5d-39b9-46e4-84e7-479f75c966c3 00:17:29.936 17:54:34 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:30.196 17:54:34 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:30.196 17:54:34 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:30.196 17:54:34 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
c20790cb-901b-4de5-9946-4377fc47f1c1 00:17:30.455 17:54:34 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:30.714 17:54:34 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:30.714 17:54:34 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1653591 00:17:30.714 17:54:34 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:30.714 17:54:34 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:30.714 17:54:34 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1653591 /var/tmp/bdevperf.sock 00:17:30.714 17:54:34 -- common/autotest_common.sh@819 -- # '[' -z 1653591 ']' 00:17:30.714 17:54:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:30.714 17:54:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:30.714 17:54:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:30.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:30.714 17:54:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:30.714 17:54:34 -- common/autotest_common.sh@10 -- # set +x 00:17:30.973 [2024-07-22 17:54:35.012880] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:30.973 [2024-07-22 17:54:35.012930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1653591 ] 00:17:30.973 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.973 [2024-07-22 17:54:35.073043] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.973 [2024-07-22 17:54:35.132543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.908 17:54:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:31.908 17:54:35 -- common/autotest_common.sh@852 -- # return 0 00:17:31.908 17:54:35 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:31.908 Nvme0n1 00:17:31.908 17:54:36 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:32.167 [ 00:17:32.167 { 00:17:32.167 "name": "Nvme0n1", 00:17:32.167 "aliases": [ 00:17:32.167 "c20790cb-901b-4de5-9946-4377fc47f1c1" 00:17:32.167 ], 00:17:32.167 "product_name": "NVMe disk", 00:17:32.167 "block_size": 4096, 00:17:32.167 "num_blocks": 38912, 00:17:32.167 "uuid": "c20790cb-901b-4de5-9946-4377fc47f1c1", 00:17:32.167 "assigned_rate_limits": { 00:17:32.167 "rw_ios_per_sec": 0, 00:17:32.167 "rw_mbytes_per_sec": 0, 00:17:32.167 "r_mbytes_per_sec": 0, 00:17:32.167 "w_mbytes_per_sec": 0 00:17:32.167 }, 00:17:32.167 "claimed": false, 00:17:32.167 "zoned": false, 00:17:32.167 "supported_io_types": { 00:17:32.167 "read": true, 00:17:32.167 "write": true, 
00:17:32.167 "unmap": true, 00:17:32.167 "write_zeroes": true, 00:17:32.167 "flush": true, 00:17:32.167 "reset": true, 00:17:32.167 "compare": true, 00:17:32.167 "compare_and_write": true, 00:17:32.167 "abort": true, 00:17:32.167 "nvme_admin": true, 00:17:32.167 "nvme_io": true 00:17:32.167 }, 00:17:32.167 "driver_specific": { 00:17:32.167 "nvme": [ 00:17:32.167 { 00:17:32.167 "trid": { 00:17:32.167 "trtype": "TCP", 00:17:32.167 "adrfam": "IPv4", 00:17:32.167 "traddr": "10.0.0.2", 00:17:32.167 "trsvcid": "4420", 00:17:32.167 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:32.167 }, 00:17:32.167 "ctrlr_data": { 00:17:32.168 "cntlid": 1, 00:17:32.168 "vendor_id": "0x8086", 00:17:32.168 "model_number": "SPDK bdev Controller", 00:17:32.168 "serial_number": "SPDK0", 00:17:32.168 "firmware_revision": "24.01.1", 00:17:32.168 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:32.168 "oacs": { 00:17:32.168 "security": 0, 00:17:32.168 "format": 0, 00:17:32.168 "firmware": 0, 00:17:32.168 "ns_manage": 0 00:17:32.168 }, 00:17:32.168 "multi_ctrlr": true, 00:17:32.168 "ana_reporting": false 00:17:32.168 }, 00:17:32.168 "vs": { 00:17:32.168 "nvme_version": "1.3" 00:17:32.168 }, 00:17:32.168 "ns_data": { 00:17:32.168 "id": 1, 00:17:32.168 "can_share": true 00:17:32.168 } 00:17:32.168 } 00:17:32.168 ], 00:17:32.168 "mp_policy": "active_passive" 00:17:32.168 } 00:17:32.168 } 00:17:32.168 ] 00:17:32.168 17:54:36 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1653897 00:17:32.168 17:54:36 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:32.168 17:54:36 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:32.168 Running I/O for 10 seconds... 00:17:33.544 Latency(us) 00:17:33.544 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.544 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:33.544 Nvme0n1 : 1.00 19557.00 76.39 0.00 0.00 0.00 0.00 0.00 00:17:33.544 =================================================================================================================== 00:17:33.544 Total : 19557.00 76.39 0.00 0.00 0.00 0.00 0.00 00:17:33.544 00:17:34.111 17:54:38 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5357fb5d-39b9-46e4-84e7-479f75c966c3 00:17:34.369 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:34.369 Nvme0n1 : 2.00 19682.50 76.88 0.00 0.00 0.00 0.00 0.00 00:17:34.369 =================================================================================================================== 00:17:34.369 Total : 19682.50 76.88 0.00 0.00 0.00 0.00 0.00 00:17:34.369 00:17:34.369 true 00:17:34.369 17:54:38 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5357fb5d-39b9-46e4-84e7-479f75c966c3 00:17:34.369 17:54:38 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:34.627 17:54:38 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:34.627 17:54:38 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:34.627 17:54:38 -- target/nvmf_lvs_grow.sh@65 -- # wait 1653897 00:17:35.222 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:35.222 Nvme0n1 : 3.00 19737.67 77.10 0.00 0.00 0.00 0.00 0.00 00:17:35.222 
=================================================================================================================== 00:17:35.222 Total : 19737.67 77.10 0.00 0.00 0.00 0.00 0.00 00:17:35.222 00:17:36.182 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:36.182 Nvme0n1 : 4.00 19775.25 77.25 0.00 0.00 0.00 0.00 0.00 00:17:36.182 =================================================================================================================== 00:17:36.182 Total : 19775.25 77.25 0.00 0.00 0.00 0.00 0.00 00:17:36.182 00:17:37.559 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:37.559 Nvme0n1 : 5.00 19809.00 77.38 0.00 0.00 0.00 0.00 0.00 00:17:37.559 =================================================================================================================== 00:17:37.559 Total : 19809.00 77.38 0.00 0.00 0.00 0.00 0.00 00:17:37.559 00:17:38.496 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:38.496 Nvme0n1 : 6.00 19834.17 77.48 0.00 0.00 0.00 0.00 0.00 00:17:38.496 =================================================================================================================== 00:17:38.496 Total : 19834.17 77.48 0.00 0.00 0.00 0.00 0.00 00:17:38.496 00:17:39.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:39.433 Nvme0n1 : 7.00 19854.43 77.56 0.00 0.00 0.00 0.00 0.00 00:17:39.433 =================================================================================================================== 00:17:39.433 Total : 19854.43 77.56 0.00 0.00 0.00 0.00 0.00 00:17:39.433 00:17:40.369 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:40.369 Nvme0n1 : 8.00 19872.62 77.63 0.00 0.00 0.00 0.00 0.00 00:17:40.369 =================================================================================================================== 00:17:40.369 Total : 19872.62 77.63 0.00 0.00 0.00 0.00 0.00 00:17:40.369 00:17:41.305 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:41.305 Nvme0n1 : 9.00 19887.67 77.69 0.00 0.00 0.00 0.00 0.00 00:17:41.305 =================================================================================================================== 00:17:41.305 Total : 19887.67 77.69 0.00 0.00 0.00 0.00 0.00 00:17:41.305 00:17:42.240 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:42.240 Nvme0n1 : 10.00 19900.50 77.74 0.00 0.00 0.00 0.00 0.00 00:17:42.240 =================================================================================================================== 00:17:42.240 Total : 19900.50 77.74 0.00 0.00 0.00 0.00 0.00 00:17:42.240 00:17:42.240 00:17:42.240 Latency(us) 00:17:42.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.240 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:42.240 Nvme0n1 : 10.01 19900.17 77.74 0.00 0.00 6428.10 4889.99 15123.69 00:17:42.240 =================================================================================================================== 00:17:42.240 Total : 19900.17 77.74 0.00 0.00 6428.10 4889.99 15123.69 00:17:42.240 0 00:17:42.240 17:54:46 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1653591 00:17:42.240 17:54:46 -- common/autotest_common.sh@926 -- # '[' -z 1653591 ']' 00:17:42.240 17:54:46 -- common/autotest_common.sh@930 -- # kill -0 1653591 00:17:42.240 17:54:46 -- common/autotest_common.sh@931 -- # uname 00:17:42.240 17:54:46 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:42.240 17:54:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1653591 00:17:42.499 17:54:46 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:42.499 17:54:46 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:42.499 17:54:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1653591' 00:17:42.499 killing process with pid 1653591 00:17:42.499 17:54:46 -- common/autotest_common.sh@945 -- # kill 1653591 00:17:42.499 Received shutdown signal, test time was about 10.000000 seconds 00:17:42.499 00:17:42.499 Latency(us) 00:17:42.499 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.499 =================================================================================================================== 00:17:42.499 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:42.499 17:54:46 -- common/autotest_common.sh@950 -- # wait 1653591 00:17:42.499 17:54:46 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:42.758 17:54:46 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5357fb5d-39b9-46e4-84e7-479f75c966c3 00:17:42.758 17:54:46 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:42.758 17:54:47 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:42.758 17:54:47 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:17:42.758 17:54:47 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 1649958 00:17:42.758 17:54:47 -- target/nvmf_lvs_grow.sh@74 -- # wait 1649958 00:17:43.018 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 1649958 Killed "${NVMF_APP[@]}" "$@" 00:17:43.018 17:54:47 -- target/nvmf_lvs_grow.sh@74 -- # true 00:17:43.018 17:54:47 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:17:43.018 17:54:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:43.018 17:54:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:43.018 17:54:47 -- common/autotest_common.sh@10 -- # set +x 00:17:43.018 17:54:47 -- nvmf/common.sh@469 -- # nvmfpid=1655631 00:17:43.018 17:54:47 -- nvmf/common.sh@470 -- # waitforlisten 1655631 00:17:43.018 17:54:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:43.018 17:54:47 -- common/autotest_common.sh@819 -- # '[' -z 1655631 ']' 00:17:43.018 17:54:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.018 17:54:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:43.018 17:54:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.018 17:54:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:43.018 17:54:47 -- common/autotest_common.sh@10 -- # set +x 00:17:43.018 [2024-07-22 17:54:47.136222] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:17:43.018 [2024-07-22 17:54:47.136276] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.018 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.018 [2024-07-22 17:54:47.224699] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.018 [2024-07-22 17:54:47.284312] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:43.018 [2024-07-22 17:54:47.284430] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.018 [2024-07-22 17:54:47.284438] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.018 [2024-07-22 17:54:47.284445] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:43.018 [2024-07-22 17:54:47.284462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.955 17:54:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:43.955 17:54:47 -- common/autotest_common.sh@852 -- # return 0 00:17:43.955 17:54:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:43.955 17:54:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:43.955 17:54:47 -- common/autotest_common.sh@10 -- # set +x 00:17:43.955 17:54:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.955 17:54:47 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:43.955 [2024-07-22 17:54:48.167381] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:43.955 [2024-07-22 17:54:48.167471] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:43.955 [2024-07-22 17:54:48.167499] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:43.955 17:54:48 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:17:43.955 17:54:48 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev c20790cb-901b-4de5-9946-4377fc47f1c1 00:17:43.955 17:54:48 -- common/autotest_common.sh@887 -- # local bdev_name=c20790cb-901b-4de5-9946-4377fc47f1c1 00:17:43.955 17:54:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:43.955 17:54:48 -- common/autotest_common.sh@889 -- # local i 00:17:43.955 17:54:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:43.955 17:54:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:43.955 17:54:48 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:44.214 17:54:48 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c20790cb-901b-4de5-9946-4377fc47f1c1 -t 2000 00:17:44.473 [ 00:17:44.473 { 00:17:44.473 "name": "c20790cb-901b-4de5-9946-4377fc47f1c1", 00:17:44.473 "aliases": [ 00:17:44.473 "lvs/lvol" 00:17:44.473 ], 00:17:44.473 "product_name": "Logical Volume", 00:17:44.473 "block_size": 4096, 00:17:44.473 "num_blocks": 38912, 00:17:44.473 "uuid": "c20790cb-901b-4de5-9946-4377fc47f1c1", 00:17:44.473 "assigned_rate_limits": { 00:17:44.473 "rw_ios_per_sec": 0, 00:17:44.473 "rw_mbytes_per_sec": 0, 00:17:44.473 "r_mbytes_per_sec": 0, 00:17:44.473 
"w_mbytes_per_sec": 0 00:17:44.473 }, 00:17:44.473 "claimed": false, 00:17:44.473 "zoned": false, 00:17:44.473 "supported_io_types": { 00:17:44.473 "read": true, 00:17:44.473 "write": true, 00:17:44.473 "unmap": true, 00:17:44.473 "write_zeroes": true, 00:17:44.473 "flush": false, 00:17:44.474 "reset": true, 00:17:44.474 "compare": false, 00:17:44.474 "compare_and_write": false, 00:17:44.474 "abort": false, 00:17:44.474 "nvme_admin": false, 00:17:44.474 "nvme_io": false 00:17:44.474 }, 00:17:44.474 "driver_specific": { 00:17:44.474 "lvol": { 00:17:44.474 "lvol_store_uuid": "5357fb5d-39b9-46e4-84e7-479f75c966c3", 00:17:44.474 "base_bdev": "aio_bdev", 00:17:44.474 "thin_provision": false, 00:17:44.474 "snapshot": false, 00:17:44.474 "clone": false, 00:17:44.474 "esnap_clone": false 00:17:44.474 } 00:17:44.474 } 00:17:44.474 } 00:17:44.474 ] 00:17:44.474 17:54:48 -- common/autotest_common.sh@895 -- # return 0 00:17:44.474 17:54:48 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5357fb5d-39b9-46e4-84e7-479f75c966c3 00:17:44.474 17:54:48 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:17:44.474 17:54:48 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:17:44.474 17:54:48 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5357fb5d-39b9-46e4-84e7-479f75c966c3 00:17:44.474 17:54:48 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:17:44.733 17:54:48 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:17:44.733 17:54:48 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:44.992 [2024-07-22 17:54:49.099873] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:44.992 17:54:49 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5357fb5d-39b9-46e4-84e7-479f75c966c3 00:17:44.992 17:54:49 -- common/autotest_common.sh@640 -- # local es=0 00:17:44.992 17:54:49 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5357fb5d-39b9-46e4-84e7-479f75c966c3 00:17:44.992 17:54:49 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:44.992 17:54:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:44.992 17:54:49 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:44.992 17:54:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:44.992 17:54:49 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:44.992 17:54:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:44.992 17:54:49 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:44.992 17:54:49 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:44.993 17:54:49 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5357fb5d-39b9-46e4-84e7-479f75c966c3 00:17:45.252 request: 00:17:45.252 { 00:17:45.252 
"uuid": "5357fb5d-39b9-46e4-84e7-479f75c966c3", 00:17:45.252 "method": "bdev_lvol_get_lvstores", 00:17:45.252 "req_id": 1 00:17:45.252 } 00:17:45.252 Got JSON-RPC error response 00:17:45.252 response: 00:17:45.252 { 00:17:45.252 "code": -19, 00:17:45.252 "message": "No such device" 00:17:45.252 } 00:17:45.252 17:54:49 -- common/autotest_common.sh@643 -- # es=1 00:17:45.252 17:54:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:45.252 17:54:49 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:45.252 17:54:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:45.252 17:54:49 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:45.512 aio_bdev 00:17:45.512 17:54:49 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev c20790cb-901b-4de5-9946-4377fc47f1c1 00:17:45.512 17:54:49 -- common/autotest_common.sh@887 -- # local bdev_name=c20790cb-901b-4de5-9946-4377fc47f1c1 00:17:45.512 17:54:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:45.512 17:54:49 -- common/autotest_common.sh@889 -- # local i 00:17:45.512 17:54:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:45.512 17:54:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:45.512 17:54:49 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:45.512 17:54:49 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c20790cb-901b-4de5-9946-4377fc47f1c1 -t 2000 00:17:45.771 [ 00:17:45.771 { 00:17:45.771 "name": "c20790cb-901b-4de5-9946-4377fc47f1c1", 00:17:45.771 "aliases": [ 00:17:45.771 "lvs/lvol" 00:17:45.771 ], 00:17:45.771 "product_name": "Logical Volume", 00:17:45.771 "block_size": 4096, 00:17:45.771 "num_blocks": 38912, 00:17:45.771 "uuid": "c20790cb-901b-4de5-9946-4377fc47f1c1", 00:17:45.771 "assigned_rate_limits": { 00:17:45.771 "rw_ios_per_sec": 0, 00:17:45.771 "rw_mbytes_per_sec": 0, 00:17:45.771 "r_mbytes_per_sec": 0, 00:17:45.771 "w_mbytes_per_sec": 0 00:17:45.771 }, 00:17:45.771 "claimed": false, 00:17:45.771 "zoned": false, 00:17:45.771 "supported_io_types": { 00:17:45.771 "read": true, 00:17:45.771 "write": true, 00:17:45.771 "unmap": true, 00:17:45.771 "write_zeroes": true, 00:17:45.771 "flush": false, 00:17:45.771 "reset": true, 00:17:45.771 "compare": false, 00:17:45.771 "compare_and_write": false, 00:17:45.771 "abort": false, 00:17:45.771 "nvme_admin": false, 00:17:45.771 "nvme_io": false 00:17:45.771 }, 00:17:45.771 "driver_specific": { 00:17:45.771 "lvol": { 00:17:45.771 "lvol_store_uuid": "5357fb5d-39b9-46e4-84e7-479f75c966c3", 00:17:45.771 "base_bdev": "aio_bdev", 00:17:45.771 "thin_provision": false, 00:17:45.771 "snapshot": false, 00:17:45.771 "clone": false, 00:17:45.771 "esnap_clone": false 00:17:45.771 } 00:17:45.771 } 00:17:45.771 } 00:17:45.771 ] 00:17:45.771 17:54:49 -- common/autotest_common.sh@895 -- # return 0 00:17:45.771 17:54:49 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5357fb5d-39b9-46e4-84e7-479f75c966c3 00:17:45.771 17:54:49 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:46.030 17:54:50 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:46.030 17:54:50 -- target/nvmf_lvs_grow.sh@88 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5357fb5d-39b9-46e4-84e7-479f75c966c3 00:17:46.030 17:54:50 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:46.030 17:54:50 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:46.030 17:54:50 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c20790cb-901b-4de5-9946-4377fc47f1c1 00:17:46.290 17:54:50 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5357fb5d-39b9-46e4-84e7-479f75c966c3 00:17:46.549 17:54:50 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:46.808 17:54:50 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:46.808 00:17:46.808 real 0m17.788s 00:17:46.808 user 0m48.550s 00:17:46.808 sys 0m3.062s 00:17:46.808 17:54:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:46.808 17:54:50 -- common/autotest_common.sh@10 -- # set +x 00:17:46.808 ************************************ 00:17:46.808 END TEST lvs_grow_dirty 00:17:46.808 ************************************ 00:17:46.808 17:54:50 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:46.808 17:54:50 -- common/autotest_common.sh@796 -- # type=--id 00:17:46.808 17:54:50 -- common/autotest_common.sh@797 -- # id=0 00:17:46.808 17:54:50 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:46.808 17:54:50 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:46.808 17:54:50 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:46.808 17:54:50 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:17:46.808 17:54:50 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:46.809 17:54:50 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:46.809 nvmf_trace.0 00:17:46.809 17:54:50 -- common/autotest_common.sh@811 -- # return 0 00:17:46.809 17:54:51 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:46.809 17:54:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:46.809 17:54:51 -- nvmf/common.sh@116 -- # sync 00:17:46.809 17:54:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:46.809 17:54:51 -- nvmf/common.sh@119 -- # set +e 00:17:46.809 17:54:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:46.809 17:54:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:46.809 rmmod nvme_tcp 00:17:46.809 rmmod nvme_fabrics 00:17:46.809 rmmod nvme_keyring 00:17:46.809 17:54:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:47.068 17:54:51 -- nvmf/common.sh@123 -- # set -e 00:17:47.068 17:54:51 -- nvmf/common.sh@124 -- # return 0 00:17:47.068 17:54:51 -- nvmf/common.sh@477 -- # '[' -n 1655631 ']' 00:17:47.068 17:54:51 -- nvmf/common.sh@478 -- # killprocess 1655631 00:17:47.068 17:54:51 -- common/autotest_common.sh@926 -- # '[' -z 1655631 ']' 00:17:47.068 17:54:51 -- common/autotest_common.sh@930 -- # kill -0 1655631 00:17:47.068 17:54:51 -- common/autotest_common.sh@931 -- # uname 00:17:47.068 17:54:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:47.068 17:54:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1655631 00:17:47.068 17:54:51 -- common/autotest_common.sh@932 
-- # process_name=reactor_0 00:17:47.068 17:54:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:47.068 17:54:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1655631' 00:17:47.068 killing process with pid 1655631 00:17:47.068 17:54:51 -- common/autotest_common.sh@945 -- # kill 1655631 00:17:47.068 17:54:51 -- common/autotest_common.sh@950 -- # wait 1655631 00:17:47.068 17:54:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:47.068 17:54:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:47.068 17:54:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:47.068 17:54:51 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:47.068 17:54:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:47.068 17:54:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.068 17:54:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:47.068 17:54:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.606 17:54:53 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:49.606 00:17:49.606 real 0m48.245s 00:17:49.606 user 1m13.633s 00:17:49.606 sys 0m11.264s 00:17:49.606 17:54:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:49.606 17:54:53 -- common/autotest_common.sh@10 -- # set +x 00:17:49.606 ************************************ 00:17:49.606 END TEST nvmf_lvs_grow 00:17:49.606 ************************************ 00:17:49.606 17:54:53 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:49.606 17:54:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:49.606 17:54:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:49.606 17:54:53 -- common/autotest_common.sh@10 -- # set +x 00:17:49.606 ************************************ 00:17:49.606 START TEST nvmf_bdev_io_wait 00:17:49.606 ************************************ 00:17:49.606 17:54:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:49.606 * Looking for test storage... 
00:17:49.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:49.606 17:54:53 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:49.606 17:54:53 -- nvmf/common.sh@7 -- # uname -s 00:17:49.606 17:54:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:49.606 17:54:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:49.606 17:54:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:49.606 17:54:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:49.606 17:54:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:49.606 17:54:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:49.606 17:54:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:49.606 17:54:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:49.606 17:54:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:49.606 17:54:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:49.607 17:54:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:49.607 17:54:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:17:49.607 17:54:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:49.607 17:54:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:49.607 17:54:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:49.607 17:54:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:49.607 17:54:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:49.607 17:54:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:49.607 17:54:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:49.607 17:54:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.607 17:54:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.607 17:54:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.607 17:54:53 -- paths/export.sh@5 -- # export PATH 00:17:49.607 17:54:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.607 17:54:53 -- nvmf/common.sh@46 -- # : 0 00:17:49.607 17:54:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:49.607 17:54:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:49.607 17:54:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:49.607 17:54:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:49.607 17:54:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:49.607 17:54:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:49.607 17:54:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:49.607 17:54:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:49.607 17:54:53 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:49.607 17:54:53 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:49.607 17:54:53 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:49.607 17:54:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:49.607 17:54:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:49.607 17:54:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:49.607 17:54:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:49.607 17:54:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:49.607 17:54:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.607 17:54:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:49.607 17:54:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.607 17:54:53 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:49.607 17:54:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:49.607 17:54:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:49.607 17:54:53 -- common/autotest_common.sh@10 -- # set +x 00:17:57.741 17:55:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:57.741 17:55:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:57.741 17:55:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:57.741 17:55:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:57.741 17:55:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:57.741 17:55:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:57.741 17:55:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:57.741 17:55:00 -- nvmf/common.sh@294 -- # net_devs=() 00:17:57.741 17:55:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:57.741 17:55:00 -- 
nvmf/common.sh@295 -- # e810=() 00:17:57.741 17:55:00 -- nvmf/common.sh@295 -- # local -ga e810 00:17:57.741 17:55:00 -- nvmf/common.sh@296 -- # x722=() 00:17:57.741 17:55:00 -- nvmf/common.sh@296 -- # local -ga x722 00:17:57.741 17:55:00 -- nvmf/common.sh@297 -- # mlx=() 00:17:57.741 17:55:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:57.741 17:55:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:57.741 17:55:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:57.742 17:55:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:57.742 17:55:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:57.742 17:55:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:57.742 17:55:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:57.742 17:55:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:57.742 17:55:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:57.742 17:55:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:57.742 17:55:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:57.742 17:55:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:57.742 17:55:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:57.742 17:55:01 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:57.742 17:55:01 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:57.742 17:55:01 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:57.742 17:55:01 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:57.742 17:55:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:57.742 17:55:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:57.742 17:55:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:57.742 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:57.742 17:55:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:57.742 17:55:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:57.742 17:55:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.742 17:55:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.742 17:55:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:57.742 17:55:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:57.742 17:55:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:57.742 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:57.742 17:55:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:57.742 17:55:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:57.742 17:55:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.742 17:55:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.742 17:55:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:57.742 17:55:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:57.742 17:55:01 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:57.742 17:55:01 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:57.742 17:55:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:57.742 17:55:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.742 17:55:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:57.742 17:55:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.742 17:55:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:17:57.742 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:57.742 17:55:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.742 17:55:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:57.742 17:55:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.742 17:55:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:57.742 17:55:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.742 17:55:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:57.742 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:57.742 17:55:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.742 17:55:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:57.742 17:55:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:57.742 17:55:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:57.742 17:55:01 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:57.742 17:55:01 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:57.742 17:55:01 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:57.742 17:55:01 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:57.742 17:55:01 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:57.742 17:55:01 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:57.742 17:55:01 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:57.742 17:55:01 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:57.742 17:55:01 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:57.742 17:55:01 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:57.742 17:55:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:57.742 17:55:01 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:57.742 17:55:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:57.742 17:55:01 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:57.742 17:55:01 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:57.742 17:55:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:57.742 17:55:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:57.742 17:55:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:57.742 17:55:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:57.742 17:55:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:57.742 17:55:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:57.742 17:55:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:57.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:57.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:17:57.742 00:17:57.742 --- 10.0.0.2 ping statistics --- 00:17:57.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.742 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:17:57.742 17:55:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:57.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:57.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:17:57.742 00:17:57.742 --- 10.0.0.1 ping statistics --- 00:17:57.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.742 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:17:57.742 17:55:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.742 17:55:01 -- nvmf/common.sh@410 -- # return 0 00:17:57.742 17:55:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:57.742 17:55:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.742 17:55:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:57.742 17:55:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:57.742 17:55:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.742 17:55:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:57.742 17:55:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:57.742 17:55:01 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:57.742 17:55:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:57.742 17:55:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:57.742 17:55:01 -- common/autotest_common.sh@10 -- # set +x 00:17:57.742 17:55:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:57.742 17:55:01 -- nvmf/common.sh@469 -- # nvmfpid=1660667 00:17:57.742 17:55:01 -- nvmf/common.sh@470 -- # waitforlisten 1660667 00:17:57.742 17:55:01 -- common/autotest_common.sh@819 -- # '[' -z 1660667 ']' 00:17:57.742 17:55:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.742 17:55:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:57.742 17:55:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.742 17:55:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:57.742 17:55:01 -- common/autotest_common.sh@10 -- # set +x 00:17:57.742 [2024-07-22 17:55:01.383265] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:57.742 [2024-07-22 17:55:01.383319] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.742 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.742 [2024-07-22 17:55:01.471110] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:57.742 [2024-07-22 17:55:01.552405] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:57.742 [2024-07-22 17:55:01.552562] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.742 [2024-07-22 17:55:01.552572] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:57.742 [2024-07-22 17:55:01.552579] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:57.742 [2024-07-22 17:55:01.552655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.742 [2024-07-22 17:55:01.552771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.742 [2024-07-22 17:55:01.552903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:57.742 [2024-07-22 17:55:01.552906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.742 17:55:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:57.742 17:55:01 -- common/autotest_common.sh@852 -- # return 0 00:17:57.742 17:55:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:57.742 17:55:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:57.742 17:55:01 -- common/autotest_common.sh@10 -- # set +x 00:17:57.742 17:55:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.742 17:55:01 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:57.742 17:55:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:57.742 17:55:01 -- common/autotest_common.sh@10 -- # set +x 00:17:57.742 17:55:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:57.742 17:55:01 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:57.742 17:55:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:57.742 17:55:01 -- common/autotest_common.sh@10 -- # set +x 00:17:57.742 17:55:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:57.742 17:55:01 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:57.742 17:55:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:57.742 17:55:01 -- common/autotest_common.sh@10 -- # set +x 00:17:57.742 [2024-07-22 17:55:01.990633] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:57.742 17:55:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:57.742 17:55:01 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:57.743 17:55:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:57.743 17:55:01 -- common/autotest_common.sh@10 -- # set +x 00:17:58.001 Malloc0 00:17:58.001 17:55:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:58.001 17:55:02 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:58.001 17:55:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:58.001 17:55:02 -- common/autotest_common.sh@10 -- # set +x 00:17:58.001 17:55:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:58.001 17:55:02 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:58.001 17:55:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:58.001 17:55:02 -- common/autotest_common.sh@10 -- # set +x 00:17:58.001 17:55:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:58.001 17:55:02 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:58.001 17:55:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:58.001 17:55:02 -- common/autotest_common.sh@10 -- # set +x 00:17:58.001 [2024-07-22 17:55:02.061510] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.001 17:55:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:58.001 17:55:02 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1660937 00:17:58.001 
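A note on the setup just recorded: nvmf_tgt is started with --wait-for-rpc, so the bdev layer stays uninitialized until framework_start_init is called, and that window is used to issue bdev_set_options -p 5 -c 1 first. Assuming -p and -c are the usual short forms of --bdev-io-pool-size and --bdev-io-cache-size, the global spdk_bdev_io pool is shrunk to 5 entries with a per-thread cache of 1, so the queue-depth-128 bdevperf jobs launched next should regularly exhaust bdev_io objects and exercise the io-wait retry path the test is named after. Expressed as plain rpc.py calls, the sequence is roughly:

  rpc.py bdev_set_options --bdev-io-pool-size 5 --bdev-io-cache-size 1   # long option names assumed
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0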
17:55:02 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:58.001 17:55:02 -- target/bdev_io_wait.sh@30 -- # READ_PID=1660940 00:17:58.001 17:55:02 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:58.001 17:55:02 -- nvmf/common.sh@520 -- # config=() 00:17:58.002 17:55:02 -- nvmf/common.sh@520 -- # local subsystem config 00:17:58.002 17:55:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:58.002 17:55:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:58.002 { 00:17:58.002 "params": { 00:17:58.002 "name": "Nvme$subsystem", 00:17:58.002 "trtype": "$TEST_TRANSPORT", 00:17:58.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:58.002 "adrfam": "ipv4", 00:17:58.002 "trsvcid": "$NVMF_PORT", 00:17:58.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:58.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:58.002 "hdgst": ${hdgst:-false}, 00:17:58.002 "ddgst": ${ddgst:-false} 00:17:58.002 }, 00:17:58.002 "method": "bdev_nvme_attach_controller" 00:17:58.002 } 00:17:58.002 EOF 00:17:58.002 )") 00:17:58.002 17:55:02 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1660942 00:17:58.002 17:55:02 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:58.002 17:55:02 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:58.002 17:55:02 -- nvmf/common.sh@520 -- # config=() 00:17:58.002 17:55:02 -- nvmf/common.sh@520 -- # local subsystem config 00:17:58.002 17:55:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:58.002 17:55:02 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1660946 00:17:58.002 17:55:02 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:58.002 17:55:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:58.002 { 00:17:58.002 "params": { 00:17:58.002 "name": "Nvme$subsystem", 00:17:58.002 "trtype": "$TEST_TRANSPORT", 00:17:58.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:58.002 "adrfam": "ipv4", 00:17:58.002 "trsvcid": "$NVMF_PORT", 00:17:58.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:58.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:58.002 "hdgst": ${hdgst:-false}, 00:17:58.002 "ddgst": ${ddgst:-false} 00:17:58.002 }, 00:17:58.002 "method": "bdev_nvme_attach_controller" 00:17:58.002 } 00:17:58.002 EOF 00:17:58.002 )") 00:17:58.002 17:55:02 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:58.002 17:55:02 -- target/bdev_io_wait.sh@35 -- # sync 00:17:58.002 17:55:02 -- nvmf/common.sh@542 -- # cat 00:17:58.002 17:55:02 -- nvmf/common.sh@520 -- # config=() 00:17:58.002 17:55:02 -- nvmf/common.sh@520 -- # local subsystem config 00:17:58.002 17:55:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:58.002 17:55:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:58.002 { 00:17:58.002 "params": { 00:17:58.002 "name": "Nvme$subsystem", 00:17:58.002 "trtype": "$TEST_TRANSPORT", 00:17:58.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:58.002 "adrfam": "ipv4", 00:17:58.002 "trsvcid": "$NVMF_PORT", 00:17:58.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:58.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:58.002 "hdgst": ${hdgst:-false}, 00:17:58.002 "ddgst": ${ddgst:-false} 00:17:58.002 }, 
00:17:58.002 "method": "bdev_nvme_attach_controller" 00:17:58.002 } 00:17:58.002 EOF 00:17:58.002 )") 00:17:58.002 17:55:02 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:58.002 17:55:02 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:58.002 17:55:02 -- nvmf/common.sh@520 -- # config=() 00:17:58.002 17:55:02 -- nvmf/common.sh@520 -- # local subsystem config 00:17:58.002 17:55:02 -- nvmf/common.sh@542 -- # cat 00:17:58.002 17:55:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:58.002 17:55:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:58.002 { 00:17:58.002 "params": { 00:17:58.002 "name": "Nvme$subsystem", 00:17:58.002 "trtype": "$TEST_TRANSPORT", 00:17:58.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:58.002 "adrfam": "ipv4", 00:17:58.002 "trsvcid": "$NVMF_PORT", 00:17:58.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:58.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:58.002 "hdgst": ${hdgst:-false}, 00:17:58.002 "ddgst": ${ddgst:-false} 00:17:58.002 }, 00:17:58.002 "method": "bdev_nvme_attach_controller" 00:17:58.002 } 00:17:58.002 EOF 00:17:58.002 )") 00:17:58.002 17:55:02 -- nvmf/common.sh@542 -- # cat 00:17:58.002 17:55:02 -- target/bdev_io_wait.sh@37 -- # wait 1660937 00:17:58.002 17:55:02 -- nvmf/common.sh@542 -- # cat 00:17:58.002 17:55:02 -- nvmf/common.sh@544 -- # jq . 00:17:58.002 17:55:02 -- nvmf/common.sh@544 -- # jq . 00:17:58.002 17:55:02 -- nvmf/common.sh@544 -- # jq . 00:17:58.002 17:55:02 -- nvmf/common.sh@545 -- # IFS=, 00:17:58.002 17:55:02 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:58.002 "params": { 00:17:58.002 "name": "Nvme1", 00:17:58.002 "trtype": "tcp", 00:17:58.002 "traddr": "10.0.0.2", 00:17:58.002 "adrfam": "ipv4", 00:17:58.002 "trsvcid": "4420", 00:17:58.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:58.002 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:58.002 "hdgst": false, 00:17:58.002 "ddgst": false 00:17:58.002 }, 00:17:58.002 "method": "bdev_nvme_attach_controller" 00:17:58.002 }' 00:17:58.002 17:55:02 -- nvmf/common.sh@544 -- # jq . 
00:17:58.002 17:55:02 -- nvmf/common.sh@545 -- # IFS=, 00:17:58.002 17:55:02 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:58.002 "params": { 00:17:58.002 "name": "Nvme1", 00:17:58.002 "trtype": "tcp", 00:17:58.002 "traddr": "10.0.0.2", 00:17:58.002 "adrfam": "ipv4", 00:17:58.002 "trsvcid": "4420", 00:17:58.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:58.002 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:58.002 "hdgst": false, 00:17:58.002 "ddgst": false 00:17:58.002 }, 00:17:58.002 "method": "bdev_nvme_attach_controller" 00:17:58.002 }' 00:17:58.002 17:55:02 -- nvmf/common.sh@545 -- # IFS=, 00:17:58.002 17:55:02 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:58.002 "params": { 00:17:58.002 "name": "Nvme1", 00:17:58.002 "trtype": "tcp", 00:17:58.002 "traddr": "10.0.0.2", 00:17:58.002 "adrfam": "ipv4", 00:17:58.002 "trsvcid": "4420", 00:17:58.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:58.002 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:58.002 "hdgst": false, 00:17:58.002 "ddgst": false 00:17:58.002 }, 00:17:58.002 "method": "bdev_nvme_attach_controller" 00:17:58.002 }' 00:17:58.002 17:55:02 -- nvmf/common.sh@545 -- # IFS=, 00:17:58.002 17:55:02 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:58.002 "params": { 00:17:58.002 "name": "Nvme1", 00:17:58.002 "trtype": "tcp", 00:17:58.002 "traddr": "10.0.0.2", 00:17:58.002 "adrfam": "ipv4", 00:17:58.002 "trsvcid": "4420", 00:17:58.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:58.002 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:58.002 "hdgst": false, 00:17:58.002 "ddgst": false 00:17:58.002 }, 00:17:58.002 "method": "bdev_nvme_attach_controller" 00:17:58.002 }' 00:17:58.002 [2024-07-22 17:55:02.108608] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:58.002 [2024-07-22 17:55:02.108661] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:58.002 [2024-07-22 17:55:02.113727] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:58.002 [2024-07-22 17:55:02.113771] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:58.002 [2024-07-22 17:55:02.115581] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:58.002 [2024-07-22 17:55:02.115623] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:58.002 [2024-07-22 17:55:02.116431] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:17:58.002 [2024-07-22 17:55:02.116477] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:58.002 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.002 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.002 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.261 [2024-07-22 17:55:02.301768] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.261 [2024-07-22 17:55:02.312124] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.261 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.261 [2024-07-22 17:55:02.357285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:58.261 [2024-07-22 17:55:02.370005] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.261 [2024-07-22 17:55:02.414950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:58.261 [2024-07-22 17:55:02.417140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:58.261 [2024-07-22 17:55:02.418561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.261 [2024-07-22 17:55:02.464909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:58.261 Running I/O for 1 seconds... 00:17:58.519 Running I/O for 1 seconds... 00:17:58.519 Running I/O for 1 seconds... 00:17:58.519 Running I/O for 1 seconds... 00:17:59.454 00:17:59.454 Latency(us) 00:17:59.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.454 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:59.454 Nvme1n1 : 1.00 207980.31 812.42 0.00 0.00 612.50 244.18 730.98 00:17:59.454 =================================================================================================================== 00:17:59.454 Total : 207980.31 812.42 0.00 0.00 612.50 244.18 730.98 00:17:59.454 00:17:59.454 Latency(us) 00:17:59.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.454 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:59.454 Nvme1n1 : 1.01 8993.79 35.13 0.00 0.00 14141.16 6175.51 23391.31 00:17:59.454 =================================================================================================================== 00:17:59.454 Total : 8993.79 35.13 0.00 0.00 14141.16 6175.51 23391.31 00:17:59.454 00:17:59.454 Latency(us) 00:17:59.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.454 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:59.454 Nvme1n1 : 1.00 19948.66 77.92 0.00 0.00 6400.53 3629.69 17442.66 00:17:59.454 =================================================================================================================== 00:17:59.454 Total : 19948.66 77.92 0.00 0.00 6400.53 3629.69 17442.66 00:17:59.712 00:17:59.712 Latency(us) 00:17:59.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.712 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:59.712 Nvme1n1 : 1.01 9040.54 35.31 0.00 0.00 14114.16 4688.34 29440.79 00:17:59.712 =================================================================================================================== 00:17:59.712 Total : 9040.54 35.31 0.00 0.00 14114.16 4688.34 29440.79 00:17:59.972 17:55:04 -- target/bdev_io_wait.sh@38 -- # wait 1660940 00:17:59.972 
17:55:04 -- target/bdev_io_wait.sh@39 -- # wait 1660942 00:17:59.972 17:55:04 -- target/bdev_io_wait.sh@40 -- # wait 1660946 00:17:59.972 17:55:04 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:59.972 17:55:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:59.972 17:55:04 -- common/autotest_common.sh@10 -- # set +x 00:17:59.972 17:55:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:59.972 17:55:04 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:59.972 17:55:04 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:59.972 17:55:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:59.972 17:55:04 -- nvmf/common.sh@116 -- # sync 00:17:59.972 17:55:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:59.972 17:55:04 -- nvmf/common.sh@119 -- # set +e 00:17:59.972 17:55:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:59.972 17:55:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:59.972 rmmod nvme_tcp 00:17:59.972 rmmod nvme_fabrics 00:17:59.972 rmmod nvme_keyring 00:17:59.972 17:55:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:59.972 17:55:04 -- nvmf/common.sh@123 -- # set -e 00:17:59.972 17:55:04 -- nvmf/common.sh@124 -- # return 0 00:17:59.972 17:55:04 -- nvmf/common.sh@477 -- # '[' -n 1660667 ']' 00:17:59.972 17:55:04 -- nvmf/common.sh@478 -- # killprocess 1660667 00:17:59.972 17:55:04 -- common/autotest_common.sh@926 -- # '[' -z 1660667 ']' 00:17:59.972 17:55:04 -- common/autotest_common.sh@930 -- # kill -0 1660667 00:17:59.972 17:55:04 -- common/autotest_common.sh@931 -- # uname 00:17:59.972 17:55:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:59.972 17:55:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1660667 00:17:59.972 17:55:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:59.972 17:55:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:59.972 17:55:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1660667' 00:17:59.972 killing process with pid 1660667 00:17:59.972 17:55:04 -- common/autotest_common.sh@945 -- # kill 1660667 00:17:59.972 17:55:04 -- common/autotest_common.sh@950 -- # wait 1660667 00:18:00.240 17:55:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:00.240 17:55:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:00.240 17:55:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:00.240 17:55:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:00.240 17:55:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:00.240 17:55:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.240 17:55:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:00.240 17:55:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.144 17:55:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:02.144 00:18:02.144 real 0m12.960s 00:18:02.144 user 0m18.910s 00:18:02.144 sys 0m7.185s 00:18:02.144 17:55:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:02.144 17:55:06 -- common/autotest_common.sh@10 -- # set +x 00:18:02.144 ************************************ 00:18:02.144 END TEST nvmf_bdev_io_wait 00:18:02.144 ************************************ 00:18:02.144 17:55:06 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:02.144 17:55:06 -- common/autotest_common.sh@1077 
-- # '[' 3 -le 1 ']' 00:18:02.144 17:55:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:02.144 17:55:06 -- common/autotest_common.sh@10 -- # set +x 00:18:02.144 ************************************ 00:18:02.144 START TEST nvmf_queue_depth 00:18:02.144 ************************************ 00:18:02.144 17:55:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:02.403 * Looking for test storage... 00:18:02.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:02.403 17:55:06 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:02.403 17:55:06 -- nvmf/common.sh@7 -- # uname -s 00:18:02.403 17:55:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.403 17:55:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.403 17:55:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.403 17:55:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.403 17:55:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.403 17:55:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.403 17:55:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.403 17:55:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.403 17:55:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.403 17:55:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.403 17:55:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:02.403 17:55:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:02.403 17:55:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.403 17:55:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.403 17:55:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:02.403 17:55:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:02.403 17:55:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.403 17:55:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.403 17:55:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.403 17:55:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.403 17:55:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.403 17:55:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.404 17:55:06 -- paths/export.sh@5 -- # export PATH 00:18:02.404 17:55:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.404 17:55:06 -- nvmf/common.sh@46 -- # : 0 00:18:02.404 17:55:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:02.404 17:55:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:02.404 17:55:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:02.404 17:55:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.404 17:55:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.404 17:55:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:02.404 17:55:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:02.404 17:55:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:02.404 17:55:06 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:02.404 17:55:06 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:02.404 17:55:06 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:02.404 17:55:06 -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:02.404 17:55:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:02.404 17:55:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.404 17:55:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:02.404 17:55:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:02.404 17:55:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:02.404 17:55:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.404 17:55:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:02.404 17:55:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.404 17:55:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:02.404 17:55:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:02.404 17:55:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:02.404 17:55:06 -- common/autotest_common.sh@10 -- # set +x 00:18:10.579 17:55:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:10.579 17:55:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:10.579 17:55:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:10.579 17:55:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:10.579 17:55:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:10.579 17:55:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:10.579 17:55:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:10.579 17:55:14 -- nvmf/common.sh@294 -- # net_devs=() 
00:18:10.579 17:55:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:10.579 17:55:14 -- nvmf/common.sh@295 -- # e810=() 00:18:10.579 17:55:14 -- nvmf/common.sh@295 -- # local -ga e810 00:18:10.579 17:55:14 -- nvmf/common.sh@296 -- # x722=() 00:18:10.579 17:55:14 -- nvmf/common.sh@296 -- # local -ga x722 00:18:10.579 17:55:14 -- nvmf/common.sh@297 -- # mlx=() 00:18:10.579 17:55:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:10.579 17:55:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:10.579 17:55:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:10.579 17:55:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:10.579 17:55:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:10.579 17:55:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:10.579 17:55:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:10.579 17:55:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:10.579 17:55:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:10.579 17:55:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:10.579 17:55:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:10.579 17:55:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:10.579 17:55:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:10.579 17:55:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:10.579 17:55:14 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:10.579 17:55:14 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:10.579 17:55:14 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:10.579 17:55:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:10.579 17:55:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:10.579 17:55:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:10.579 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:10.579 17:55:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:10.579 17:55:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:10.579 17:55:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:10.579 17:55:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:10.579 17:55:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:10.579 17:55:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:10.579 17:55:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:10.579 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:10.579 17:55:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:10.579 17:55:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:10.579 17:55:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:10.579 17:55:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:10.579 17:55:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:10.579 17:55:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:10.579 17:55:14 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:10.579 17:55:14 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:10.579 17:55:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:10.579 17:55:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.579 17:55:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:10.579 17:55:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:18:10.579 17:55:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:10.579 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:10.579 17:55:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.579 17:55:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:10.579 17:55:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.579 17:55:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:10.579 17:55:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.579 17:55:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:10.579 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:10.579 17:55:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.579 17:55:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:10.579 17:55:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:10.579 17:55:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:10.579 17:55:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:10.579 17:55:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:10.579 17:55:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:10.579 17:55:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:10.579 17:55:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:10.579 17:55:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:10.579 17:55:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:10.579 17:55:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:10.579 17:55:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:10.579 17:55:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:10.579 17:55:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:10.579 17:55:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:10.579 17:55:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:10.579 17:55:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:10.579 17:55:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:10.579 17:55:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:10.579 17:55:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:10.579 17:55:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:10.579 17:55:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:10.579 17:55:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:10.579 17:55:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:10.579 17:55:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:10.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:10.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:18:10.579 00:18:10.579 --- 10.0.0.2 ping statistics --- 00:18:10.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.579 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:18:10.579 17:55:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:10.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:10.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:18:10.579 00:18:10.579 --- 10.0.0.1 ping statistics --- 00:18:10.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.579 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:18:10.580 17:55:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:10.580 17:55:14 -- nvmf/common.sh@410 -- # return 0 00:18:10.580 17:55:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:10.580 17:55:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:10.580 17:55:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:10.580 17:55:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:10.580 17:55:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:10.580 17:55:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:10.580 17:55:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:10.580 17:55:14 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:10.580 17:55:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:10.580 17:55:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:10.580 17:55:14 -- common/autotest_common.sh@10 -- # set +x 00:18:10.580 17:55:14 -- nvmf/common.sh@469 -- # nvmfpid=1665562 00:18:10.580 17:55:14 -- nvmf/common.sh@470 -- # waitforlisten 1665562 00:18:10.580 17:55:14 -- common/autotest_common.sh@819 -- # '[' -z 1665562 ']' 00:18:10.580 17:55:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:10.580 17:55:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.580 17:55:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:10.580 17:55:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.580 17:55:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:10.580 17:55:14 -- common/autotest_common.sh@10 -- # set +x 00:18:10.580 [2024-07-22 17:55:14.819244] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:10.580 [2024-07-22 17:55:14.819308] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.839 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.839 [2024-07-22 17:55:14.895287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.839 [2024-07-22 17:55:14.964138] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:10.839 [2024-07-22 17:55:14.964262] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.839 [2024-07-22 17:55:14.964270] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.839 [2024-07-22 17:55:14.964277] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:10.839 [2024-07-22 17:55:14.964302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.406 17:55:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:11.406 17:55:15 -- common/autotest_common.sh@852 -- # return 0 00:18:11.406 17:55:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:11.407 17:55:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:11.407 17:55:15 -- common/autotest_common.sh@10 -- # set +x 00:18:11.666 17:55:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.666 17:55:15 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:11.666 17:55:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:11.666 17:55:15 -- common/autotest_common.sh@10 -- # set +x 00:18:11.666 [2024-07-22 17:55:15.698655] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:11.666 17:55:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:11.666 17:55:15 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:11.666 17:55:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:11.666 17:55:15 -- common/autotest_common.sh@10 -- # set +x 00:18:11.666 Malloc0 00:18:11.666 17:55:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:11.666 17:55:15 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:11.666 17:55:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:11.666 17:55:15 -- common/autotest_common.sh@10 -- # set +x 00:18:11.666 17:55:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:11.666 17:55:15 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:11.666 17:55:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:11.666 17:55:15 -- common/autotest_common.sh@10 -- # set +x 00:18:11.666 17:55:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:11.666 17:55:15 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:11.666 17:55:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:11.666 17:55:15 -- common/autotest_common.sh@10 -- # set +x 00:18:11.666 [2024-07-22 17:55:15.763295] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:11.666 17:55:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:11.666 17:55:15 -- target/queue_depth.sh@30 -- # bdevperf_pid=1665849 00:18:11.666 17:55:15 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:11.666 17:55:15 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:11.666 17:55:15 -- target/queue_depth.sh@33 -- # waitforlisten 1665849 /var/tmp/bdevperf.sock 00:18:11.666 17:55:15 -- common/autotest_common.sh@819 -- # '[' -z 1665849 ']' 00:18:11.666 17:55:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:11.666 17:55:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:11.666 17:55:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:11.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:11.666 17:55:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:11.666 17:55:15 -- common/autotest_common.sh@10 -- # set +x 00:18:11.666 [2024-07-22 17:55:15.811696] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:11.666 [2024-07-22 17:55:15.811742] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1665849 ] 00:18:11.666 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.666 [2024-07-22 17:55:15.891662] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.926 [2024-07-22 17:55:15.950918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.495 17:55:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:12.495 17:55:16 -- common/autotest_common.sh@852 -- # return 0 00:18:12.495 17:55:16 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:12.495 17:55:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.495 17:55:16 -- common/autotest_common.sh@10 -- # set +x 00:18:12.755 NVMe0n1 00:18:12.755 17:55:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.755 17:55:16 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:12.755 Running I/O for 10 seconds... 00:18:22.746 00:18:22.746 Latency(us) 00:18:22.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.746 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:22.746 Verification LBA range: start 0x0 length 0x4000 00:18:22.746 NVMe0n1 : 10.06 15069.70 58.87 0.00 0.00 67724.57 13409.67 50613.96 00:18:22.746 =================================================================================================================== 00:18:22.746 Total : 15069.70 58.87 0.00 0.00 67724.57 13409.67 50613.96 00:18:22.746 0 00:18:22.746 17:55:27 -- target/queue_depth.sh@39 -- # killprocess 1665849 00:18:22.746 17:55:27 -- common/autotest_common.sh@926 -- # '[' -z 1665849 ']' 00:18:22.746 17:55:27 -- common/autotest_common.sh@930 -- # kill -0 1665849 00:18:22.746 17:55:27 -- common/autotest_common.sh@931 -- # uname 00:18:22.746 17:55:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:22.746 17:55:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1665849 00:18:23.007 17:55:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:23.007 17:55:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:23.007 17:55:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1665849' 00:18:23.007 killing process with pid 1665849 00:18:23.007 17:55:27 -- common/autotest_common.sh@945 -- # kill 1665849 00:18:23.007 Received shutdown signal, test time was about 10.000000 seconds 00:18:23.007 00:18:23.007 Latency(us) 00:18:23.007 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.007 =================================================================================================================== 00:18:23.007 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:23.007 17:55:27 -- 
common/autotest_common.sh@950 -- # wait 1665849 00:18:23.007 17:55:27 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:23.007 17:55:27 -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:23.007 17:55:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:23.007 17:55:27 -- nvmf/common.sh@116 -- # sync 00:18:23.007 17:55:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:23.007 17:55:27 -- nvmf/common.sh@119 -- # set +e 00:18:23.007 17:55:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:23.007 17:55:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:23.007 rmmod nvme_tcp 00:18:23.007 rmmod nvme_fabrics 00:18:23.007 rmmod nvme_keyring 00:18:23.007 17:55:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:23.007 17:55:27 -- nvmf/common.sh@123 -- # set -e 00:18:23.007 17:55:27 -- nvmf/common.sh@124 -- # return 0 00:18:23.007 17:55:27 -- nvmf/common.sh@477 -- # '[' -n 1665562 ']' 00:18:23.007 17:55:27 -- nvmf/common.sh@478 -- # killprocess 1665562 00:18:23.007 17:55:27 -- common/autotest_common.sh@926 -- # '[' -z 1665562 ']' 00:18:23.007 17:55:27 -- common/autotest_common.sh@930 -- # kill -0 1665562 00:18:23.007 17:55:27 -- common/autotest_common.sh@931 -- # uname 00:18:23.007 17:55:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:23.268 17:55:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1665562 00:18:23.268 17:55:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:23.268 17:55:27 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:23.268 17:55:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1665562' 00:18:23.268 killing process with pid 1665562 00:18:23.268 17:55:27 -- common/autotest_common.sh@945 -- # kill 1665562 00:18:23.268 17:55:27 -- common/autotest_common.sh@950 -- # wait 1665562 00:18:23.268 17:55:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:23.268 17:55:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:23.268 17:55:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:23.268 17:55:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:23.268 17:55:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:23.268 17:55:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.268 17:55:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:23.268 17:55:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.812 17:55:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:25.812 00:18:25.812 real 0m23.138s 00:18:25.812 user 0m25.627s 00:18:25.812 sys 0m7.582s 00:18:25.812 17:55:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:25.812 17:55:29 -- common/autotest_common.sh@10 -- # set +x 00:18:25.812 ************************************ 00:18:25.812 END TEST nvmf_queue_depth 00:18:25.812 ************************************ 00:18:25.812 17:55:29 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:25.812 17:55:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:25.812 17:55:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:25.812 17:55:29 -- common/autotest_common.sh@10 -- # set +x 00:18:25.812 ************************************ 00:18:25.812 START TEST nvmf_multipath 00:18:25.812 ************************************ 00:18:25.812 17:55:29 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:25.812 * Looking for test storage... 00:18:25.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:25.812 17:55:29 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:25.812 17:55:29 -- nvmf/common.sh@7 -- # uname -s 00:18:25.812 17:55:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.812 17:55:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.812 17:55:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.812 17:55:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.812 17:55:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.812 17:55:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.812 17:55:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.812 17:55:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.812 17:55:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.812 17:55:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.812 17:55:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:25.812 17:55:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:25.812 17:55:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.812 17:55:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.812 17:55:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:25.812 17:55:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:25.812 17:55:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.812 17:55:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.812 17:55:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.812 17:55:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.812 17:55:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.812 17:55:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.812 17:55:29 -- paths/export.sh@5 -- # export PATH 00:18:25.812 17:55:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.812 17:55:29 -- nvmf/common.sh@46 -- # : 0 00:18:25.812 17:55:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:25.812 17:55:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:25.812 17:55:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:25.812 17:55:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.812 17:55:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.812 17:55:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:25.812 17:55:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:25.812 17:55:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:25.812 17:55:29 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:25.812 17:55:29 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:25.812 17:55:29 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:25.812 17:55:29 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:25.812 17:55:29 -- target/multipath.sh@43 -- # nvmftestinit 00:18:25.812 17:55:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:25.812 17:55:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:25.812 17:55:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:25.812 17:55:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:25.812 17:55:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:25.812 17:55:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.812 17:55:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:25.812 17:55:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.812 17:55:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:25.812 17:55:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:25.812 17:55:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:25.812 17:55:29 -- common/autotest_common.sh@10 -- # set +x 00:18:33.957 17:55:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:33.957 17:55:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:33.957 17:55:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:33.957 17:55:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:33.957 17:55:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:33.957 17:55:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:33.957 17:55:37 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:18:33.957 17:55:37 -- nvmf/common.sh@294 -- # net_devs=() 00:18:33.957 17:55:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:33.957 17:55:37 -- nvmf/common.sh@295 -- # e810=() 00:18:33.957 17:55:37 -- nvmf/common.sh@295 -- # local -ga e810 00:18:33.957 17:55:37 -- nvmf/common.sh@296 -- # x722=() 00:18:33.957 17:55:37 -- nvmf/common.sh@296 -- # local -ga x722 00:18:33.957 17:55:37 -- nvmf/common.sh@297 -- # mlx=() 00:18:33.957 17:55:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:33.957 17:55:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:33.957 17:55:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:33.957 17:55:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:33.957 17:55:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:33.957 17:55:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:33.957 17:55:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:33.957 17:55:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:33.957 17:55:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:33.957 17:55:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:33.957 17:55:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:33.957 17:55:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:33.957 17:55:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:33.957 17:55:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:33.957 17:55:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:33.957 17:55:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:33.957 17:55:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:33.957 17:55:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:33.957 17:55:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:33.957 17:55:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:33.957 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:33.957 17:55:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:33.957 17:55:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:33.957 17:55:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.957 17:55:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.957 17:55:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:33.957 17:55:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:33.957 17:55:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:33.957 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:33.957 17:55:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:33.957 17:55:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:33.957 17:55:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.957 17:55:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.957 17:55:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:33.957 17:55:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:33.957 17:55:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:33.957 17:55:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:33.957 17:55:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:33.957 17:55:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.957 17:55:37 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:18:33.957 17:55:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.957 17:55:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:33.957 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:33.957 17:55:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:33.957 17:55:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:33.957 17:55:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.957 17:55:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:33.957 17:55:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.957 17:55:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:33.957 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:33.957 17:55:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:33.957 17:55:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:33.957 17:55:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:33.957 17:55:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:33.957 17:55:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:33.957 17:55:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:33.957 17:55:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:33.957 17:55:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:33.957 17:55:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:33.957 17:55:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:33.957 17:55:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:33.957 17:55:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:33.957 17:55:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:33.957 17:55:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:33.957 17:55:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:33.957 17:55:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:33.957 17:55:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:33.957 17:55:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:33.957 17:55:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:33.957 17:55:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:33.957 17:55:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:33.957 17:55:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:33.957 17:55:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:33.957 17:55:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:33.957 17:55:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:33.957 17:55:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:33.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:33.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:18:33.957 00:18:33.957 --- 10.0.0.2 ping statistics --- 00:18:33.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.958 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:18:33.958 17:55:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:33.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:33.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:18:33.958 00:18:33.958 --- 10.0.0.1 ping statistics --- 00:18:33.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.958 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:18:33.958 17:55:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:33.958 17:55:37 -- nvmf/common.sh@410 -- # return 0 00:18:33.958 17:55:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:33.958 17:55:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:33.958 17:55:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:33.958 17:55:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:33.958 17:55:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:33.958 17:55:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:33.958 17:55:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:33.958 17:55:37 -- target/multipath.sh@45 -- # '[' -z ']' 00:18:33.958 17:55:37 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:33.958 only one NIC for nvmf test 00:18:33.958 17:55:37 -- target/multipath.sh@47 -- # nvmftestfini 00:18:33.958 17:55:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:33.958 17:55:37 -- nvmf/common.sh@116 -- # sync 00:18:33.958 17:55:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:33.958 17:55:37 -- nvmf/common.sh@119 -- # set +e 00:18:33.958 17:55:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:33.958 17:55:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:33.958 rmmod nvme_tcp 00:18:33.958 rmmod nvme_fabrics 00:18:33.958 rmmod nvme_keyring 00:18:33.958 17:55:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:33.958 17:55:37 -- nvmf/common.sh@123 -- # set -e 00:18:33.958 17:55:37 -- nvmf/common.sh@124 -- # return 0 00:18:33.958 17:55:37 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:33.958 17:55:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:33.958 17:55:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:33.958 17:55:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:33.958 17:55:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:33.958 17:55:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:33.958 17:55:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.958 17:55:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:33.958 17:55:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.873 17:55:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:35.873 17:55:39 -- target/multipath.sh@48 -- # exit 0 00:18:35.873 17:55:39 -- target/multipath.sh@1 -- # nvmftestfini 00:18:35.873 17:55:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:35.873 17:55:39 -- nvmf/common.sh@116 -- # sync 00:18:35.873 17:55:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:35.873 17:55:39 -- nvmf/common.sh@119 -- # set +e 00:18:35.873 17:55:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:35.873 17:55:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:35.873 17:55:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:35.873 17:55:39 -- nvmf/common.sh@123 -- # set -e 00:18:35.873 17:55:39 -- nvmf/common.sh@124 -- # return 0 00:18:35.873 17:55:39 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:35.873 17:55:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:35.873 17:55:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:35.873 17:55:39 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:18:35.873 17:55:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:35.873 17:55:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:35.873 17:55:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.873 17:55:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:35.873 17:55:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.873 17:55:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:35.873 00:18:35.873 real 0m10.100s 00:18:35.873 user 0m2.235s 00:18:35.873 sys 0m5.763s 00:18:35.874 17:55:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:35.874 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:18:35.874 ************************************ 00:18:35.874 END TEST nvmf_multipath 00:18:35.874 ************************************ 00:18:35.874 17:55:39 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:35.874 17:55:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:35.874 17:55:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:35.874 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:18:35.874 ************************************ 00:18:35.874 START TEST nvmf_zcopy 00:18:35.874 ************************************ 00:18:35.874 17:55:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:35.874 * Looking for test storage... 00:18:35.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:35.874 17:55:39 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:35.874 17:55:39 -- nvmf/common.sh@7 -- # uname -s 00:18:35.874 17:55:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:35.874 17:55:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:35.874 17:55:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:35.874 17:55:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:35.874 17:55:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:35.874 17:55:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:35.874 17:55:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:35.874 17:55:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:35.874 17:55:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:35.874 17:55:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:35.874 17:55:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:35.874 17:55:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:18:35.874 17:55:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:35.874 17:55:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:35.874 17:55:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:35.874 17:55:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:35.874 17:55:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:35.874 17:55:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:35.874 17:55:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:35.874 17:55:39 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.874 17:55:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.874 17:55:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.874 17:55:39 -- paths/export.sh@5 -- # export PATH 00:18:35.874 17:55:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.874 17:55:39 -- nvmf/common.sh@46 -- # : 0 00:18:35.874 17:55:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:35.874 17:55:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:35.874 17:55:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:35.874 17:55:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:35.874 17:55:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:35.874 17:55:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:35.874 17:55:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:35.874 17:55:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:35.874 17:55:39 -- target/zcopy.sh@12 -- # nvmftestinit 00:18:35.874 17:55:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:35.874 17:55:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:35.874 17:55:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:35.874 17:55:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:35.874 17:55:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:35.874 17:55:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.874 17:55:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:35.874 17:55:39 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.874 17:55:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:35.874 17:55:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:35.874 17:55:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:35.874 17:55:39 -- common/autotest_common.sh@10 -- # set +x 00:18:44.016 17:55:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:44.016 17:55:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:44.016 17:55:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:44.016 17:55:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:44.016 17:55:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:44.016 17:55:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:44.016 17:55:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:44.016 17:55:47 -- nvmf/common.sh@294 -- # net_devs=() 00:18:44.016 17:55:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:44.016 17:55:47 -- nvmf/common.sh@295 -- # e810=() 00:18:44.016 17:55:47 -- nvmf/common.sh@295 -- # local -ga e810 00:18:44.016 17:55:47 -- nvmf/common.sh@296 -- # x722=() 00:18:44.016 17:55:47 -- nvmf/common.sh@296 -- # local -ga x722 00:18:44.016 17:55:47 -- nvmf/common.sh@297 -- # mlx=() 00:18:44.016 17:55:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:44.016 17:55:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:44.016 17:55:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:44.016 17:55:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:44.016 17:55:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:44.016 17:55:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:44.016 17:55:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:44.016 17:55:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:44.016 17:55:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:44.017 17:55:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:44.017 17:55:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:44.017 17:55:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:44.017 17:55:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:44.017 17:55:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:44.017 17:55:47 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:44.017 17:55:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:44.017 17:55:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:44.017 17:55:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:44.017 17:55:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:44.017 17:55:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:44.017 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:44.017 17:55:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:44.017 17:55:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:44.017 17:55:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:44.017 17:55:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:44.017 17:55:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:44.017 17:55:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:44.017 17:55:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:44.017 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:44.017 
17:55:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:44.017 17:55:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:44.017 17:55:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:44.017 17:55:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:44.017 17:55:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:44.017 17:55:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:44.017 17:55:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:44.017 17:55:47 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:44.017 17:55:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:44.017 17:55:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.017 17:55:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:44.017 17:55:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.017 17:55:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:44.017 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:44.017 17:55:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.017 17:55:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:44.017 17:55:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.017 17:55:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:44.017 17:55:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.017 17:55:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:44.017 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:44.017 17:55:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.017 17:55:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:44.017 17:55:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:44.017 17:55:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:44.017 17:55:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:44.017 17:55:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:44.017 17:55:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:44.017 17:55:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:44.017 17:55:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:44.017 17:55:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:44.017 17:55:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:44.017 17:55:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:44.017 17:55:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:44.017 17:55:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:44.017 17:55:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:44.017 17:55:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:44.017 17:55:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:44.017 17:55:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:44.017 17:55:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:44.017 17:55:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:44.017 17:55:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:44.017 17:55:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:44.017 17:55:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:44.017 17:55:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:44.017 17:55:48 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:44.017 17:55:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:44.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:44.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.595 ms 00:18:44.017 00:18:44.017 --- 10.0.0.2 ping statistics --- 00:18:44.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.017 rtt min/avg/max/mdev = 0.595/0.595/0.595/0.000 ms 00:18:44.017 17:55:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:44.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:44.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:18:44.017 00:18:44.017 --- 10.0.0.1 ping statistics --- 00:18:44.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.017 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:18:44.017 17:55:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:44.017 17:55:48 -- nvmf/common.sh@410 -- # return 0 00:18:44.017 17:55:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:44.017 17:55:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:44.017 17:55:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:44.017 17:55:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:44.017 17:55:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:44.017 17:55:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:44.017 17:55:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:44.017 17:55:48 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:44.017 17:55:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:44.017 17:55:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:44.017 17:55:48 -- common/autotest_common.sh@10 -- # set +x 00:18:44.017 17:55:48 -- nvmf/common.sh@469 -- # nvmfpid=1676414 00:18:44.017 17:55:48 -- nvmf/common.sh@470 -- # waitforlisten 1676414 00:18:44.017 17:55:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:44.017 17:55:48 -- common/autotest_common.sh@819 -- # '[' -z 1676414 ']' 00:18:44.017 17:55:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.017 17:55:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:44.017 17:55:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.017 17:55:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:44.017 17:55:48 -- common/autotest_common.sh@10 -- # set +x 00:18:44.017 [2024-07-22 17:55:48.123121] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:18:44.017 [2024-07-22 17:55:48.123185] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.017 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.017 [2024-07-22 17:55:48.196475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.017 [2024-07-22 17:55:48.264575] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:44.017 [2024-07-22 17:55:48.264691] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.017 [2024-07-22 17:55:48.264699] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.017 [2024-07-22 17:55:48.264705] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:44.017 [2024-07-22 17:55:48.264722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.960 17:55:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:44.960 17:55:48 -- common/autotest_common.sh@852 -- # return 0 00:18:44.960 17:55:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:44.960 17:55:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:44.960 17:55:48 -- common/autotest_common.sh@10 -- # set +x 00:18:44.960 17:55:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.960 17:55:48 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:44.960 17:55:48 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:44.960 17:55:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:44.960 17:55:48 -- common/autotest_common.sh@10 -- # set +x 00:18:44.960 [2024-07-22 17:55:48.998553] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:44.960 17:55:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:44.960 17:55:49 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:44.960 17:55:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:44.960 17:55:49 -- common/autotest_common.sh@10 -- # set +x 00:18:44.960 17:55:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:44.960 17:55:49 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:44.960 17:55:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:44.960 17:55:49 -- common/autotest_common.sh@10 -- # set +x 00:18:44.960 [2024-07-22 17:55:49.022720] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:44.960 17:55:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:44.960 17:55:49 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:44.960 17:55:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:44.960 17:55:49 -- common/autotest_common.sh@10 -- # set +x 00:18:44.960 17:55:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:44.960 17:55:49 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:44.960 17:55:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:44.960 17:55:49 -- common/autotest_common.sh@10 -- # set +x 00:18:44.960 malloc0 00:18:44.960 17:55:49 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:18:44.960 17:55:49 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:44.960 17:55:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:44.960 17:55:49 -- common/autotest_common.sh@10 -- # set +x 00:18:44.960 17:55:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:44.960 17:55:49 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:44.960 17:55:49 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:44.960 17:55:49 -- nvmf/common.sh@520 -- # config=() 00:18:44.961 17:55:49 -- nvmf/common.sh@520 -- # local subsystem config 00:18:44.961 17:55:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:44.961 17:55:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:44.961 { 00:18:44.961 "params": { 00:18:44.961 "name": "Nvme$subsystem", 00:18:44.961 "trtype": "$TEST_TRANSPORT", 00:18:44.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:44.961 "adrfam": "ipv4", 00:18:44.961 "trsvcid": "$NVMF_PORT", 00:18:44.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:44.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:44.961 "hdgst": ${hdgst:-false}, 00:18:44.961 "ddgst": ${ddgst:-false} 00:18:44.961 }, 00:18:44.961 "method": "bdev_nvme_attach_controller" 00:18:44.961 } 00:18:44.961 EOF 00:18:44.961 )") 00:18:44.961 17:55:49 -- nvmf/common.sh@542 -- # cat 00:18:44.961 17:55:49 -- nvmf/common.sh@544 -- # jq . 00:18:44.961 17:55:49 -- nvmf/common.sh@545 -- # IFS=, 00:18:44.961 17:55:49 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:44.961 "params": { 00:18:44.961 "name": "Nvme1", 00:18:44.961 "trtype": "tcp", 00:18:44.961 "traddr": "10.0.0.2", 00:18:44.961 "adrfam": "ipv4", 00:18:44.961 "trsvcid": "4420", 00:18:44.961 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.961 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:44.961 "hdgst": false, 00:18:44.961 "ddgst": false 00:18:44.961 }, 00:18:44.961 "method": "bdev_nvme_attach_controller" 00:18:44.961 }' 00:18:44.961 [2024-07-22 17:55:49.111856] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:44.961 [2024-07-22 17:55:49.111903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1676733 ] 00:18:44.961 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.961 [2024-07-22 17:55:49.190655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.221 [2024-07-22 17:55:49.250587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.221 Running I/O for 10 seconds... 
00:18:55.224 00:18:55.224 Latency(us) 00:18:55.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.224 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:55.224 Verification LBA range: start 0x0 length 0x1000 00:18:55.224 Nvme1n1 : 10.01 10199.46 79.68 0.00 0.00 12520.73 1562.78 18249.26 00:18:55.224 =================================================================================================================== 00:18:55.224 Total : 10199.46 79.68 0.00 0.00 12520.73 1562.78 18249.26 00:18:55.485 17:55:59 -- target/zcopy.sh@39 -- # perfpid=1678312 00:18:55.485 17:55:59 -- target/zcopy.sh@41 -- # xtrace_disable 00:18:55.485 17:55:59 -- common/autotest_common.sh@10 -- # set +x 00:18:55.485 17:55:59 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:55.485 17:55:59 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:55.485 17:55:59 -- nvmf/common.sh@520 -- # config=() 00:18:55.485 17:55:59 -- nvmf/common.sh@520 -- # local subsystem config 00:18:55.485 17:55:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:55.485 17:55:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:55.485 { 00:18:55.485 "params": { 00:18:55.485 "name": "Nvme$subsystem", 00:18:55.485 "trtype": "$TEST_TRANSPORT", 00:18:55.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:55.485 "adrfam": "ipv4", 00:18:55.485 "trsvcid": "$NVMF_PORT", 00:18:55.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:55.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:55.485 "hdgst": ${hdgst:-false}, 00:18:55.485 "ddgst": ${ddgst:-false} 00:18:55.485 }, 00:18:55.485 "method": "bdev_nvme_attach_controller" 00:18:55.485 } 00:18:55.485 EOF 00:18:55.485 )") 00:18:55.485 [2024-07-22 17:55:59.554932] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.485 [2024-07-22 17:55:59.554963] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.485 17:55:59 -- nvmf/common.sh@542 -- # cat 00:18:55.485 17:55:59 -- nvmf/common.sh@544 -- # jq . 
00:18:55.485 17:55:59 -- nvmf/common.sh@545 -- # IFS=, 00:18:55.485 17:55:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:55.485 "params": { 00:18:55.485 "name": "Nvme1", 00:18:55.485 "trtype": "tcp", 00:18:55.485 "traddr": "10.0.0.2", 00:18:55.485 "adrfam": "ipv4", 00:18:55.485 "trsvcid": "4420", 00:18:55.485 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.485 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:55.485 "hdgst": false, 00:18:55.485 "ddgst": false 00:18:55.485 }, 00:18:55.485 "method": "bdev_nvme_attach_controller" 00:18:55.485 }' 00:18:55.486 [2024-07-22 17:55:59.566935] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.486 [2024-07-22 17:55:59.566946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.486 [2024-07-22 17:55:59.578966] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.486 [2024-07-22 17:55:59.578976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.486 [2024-07-22 17:55:59.590997] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.486 [2024-07-22 17:55:59.591007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.486 [2024-07-22 17:55:59.601261] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:55.486 [2024-07-22 17:55:59.601328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1678312 ] 00:18:55.486 [2024-07-22 17:55:59.603030] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.486 [2024-07-22 17:55:59.603039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.486 [2024-07-22 17:55:59.615061] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.486 [2024-07-22 17:55:59.615070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.486 [2024-07-22 17:55:59.627092] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.486 [2024-07-22 17:55:59.627101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.486 EAL: No free 2048 kB hugepages reported on node 1 00:18:55.486 [2024-07-22 17:55:59.639125] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.486 [2024-07-22 17:55:59.639135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.486 [2024-07-22 17:55:59.651157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.486 [2024-07-22 17:55:59.651167] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.486 [2024-07-22 17:55:59.663188] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.486 [2024-07-22 17:55:59.663197] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.486 [2024-07-22 17:55:59.675220] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.486 [2024-07-22 17:55:59.675229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.486 [2024-07-22 17:55:59.686253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 
00:18:55.486 [2024-07-22 17:55:59.687252] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.486 [2024-07-22 17:55:59.687261] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.486 [2024-07-22 17:55:59.699287] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.486 [2024-07-22 17:55:59.699298] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.486 [2024-07-22 17:55:59.711317] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.486 [2024-07-22 17:55:59.711328] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.486 [2024-07-22 17:55:59.723354] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.486 [2024-07-22 17:55:59.723370] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.486 [2024-07-22 17:55:59.735384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.486 [2024-07-22 17:55:59.735399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.486 [2024-07-22 17:55:59.745444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.486 [2024-07-22 17:55:59.747410] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.486 [2024-07-22 17:55:59.747419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.486 [2024-07-22 17:55:59.759449] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.486 [2024-07-22 17:55:59.759464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.748 [2024-07-22 17:55:59.771480] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.748 [2024-07-22 17:55:59.771493] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.748 [2024-07-22 17:55:59.783505] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.748 [2024-07-22 17:55:59.783516] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.748 [2024-07-22 17:55:59.795537] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.748 [2024-07-22 17:55:59.795548] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.748 [2024-07-22 17:55:59.807569] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.748 [2024-07-22 17:55:59.807578] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.748 [2024-07-22 17:55:59.819613] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.748 [2024-07-22 17:55:59.819628] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.748 [2024-07-22 17:55:59.831641] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.748 [2024-07-22 17:55:59.831652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.748 [2024-07-22 17:55:59.843675] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.748 [2024-07-22 17:55:59.843686] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.748 [2024-07-22 17:55:59.855707] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.748 [2024-07-22 17:55:59.855716] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.748 [2024-07-22 17:55:59.867742] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.748 [2024-07-22 17:55:59.867751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.748 [2024-07-22 17:55:59.879777] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.748 [2024-07-22 17:55:59.879787] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.748 [2024-07-22 17:55:59.891809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.748 [2024-07-22 17:55:59.891820] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.748 [2024-07-22 17:55:59.903840] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.748 [2024-07-22 17:55:59.903849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.748 [2024-07-22 17:55:59.915874] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.748 [2024-07-22 17:55:59.915883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.748 [2024-07-22 17:55:59.927905] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.748 [2024-07-22 17:55:59.927914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.748 [2024-07-22 17:55:59.939935] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.748 [2024-07-22 17:55:59.939947] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.748 [2024-07-22 17:55:59.951965] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.748 [2024-07-22 17:55:59.951974] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.748 [2024-07-22 17:55:59.963999] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.748 [2024-07-22 17:55:59.964008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.748 [2024-07-22 17:55:59.976032] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.748 [2024-07-22 17:55:59.976042] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.748 [2024-07-22 17:55:59.988065] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.748 [2024-07-22 17:55:59.988075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.748 [2024-07-22 17:56:00.000100] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.748 [2024-07-22 17:56:00.000109] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.748 [2024-07-22 17:56:00.012135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.748 [2024-07-22 17:56:00.012146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.010 [2024-07-22 17:56:00.024166] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.010 [2024-07-22 17:56:00.024179] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.010 [2024-07-22 17:56:00.073762] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.010 [2024-07-22 17:56:00.073779] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.010 [2024-07-22 17:56:00.084334] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.010 [2024-07-22 17:56:00.084345] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.010 Running I/O for 5 seconds... 00:18:56.010 [2024-07-22 17:56:00.101611] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.010 [2024-07-22 17:56:00.101630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.010 [2024-07-22 17:56:00.117115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.010 [2024-07-22 17:56:00.117134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.010 [2024-07-22 17:56:00.128585] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.010 [2024-07-22 17:56:00.128603] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.010 [2024-07-22 17:56:00.144127] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.010 [2024-07-22 17:56:00.144145] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.010 [2024-07-22 17:56:00.160509] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.010 [2024-07-22 17:56:00.160526] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.010 [2024-07-22 17:56:00.175962] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.010 [2024-07-22 17:56:00.175980] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.010 [2024-07-22 17:56:00.190455] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.010 [2024-07-22 17:56:00.190473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.010 [2024-07-22 17:56:00.201677] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.010 [2024-07-22 17:56:00.201695] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.010 [2024-07-22 17:56:00.217608] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.010 [2024-07-22 17:56:00.217626] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.010 [2024-07-22 17:56:00.233754] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.010 [2024-07-22 17:56:00.233771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.010 [2024-07-22 17:56:00.250118] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.010 [2024-07-22 17:56:00.250140] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.010 [2024-07-22 17:56:00.266866] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.010 [2024-07-22 17:56:00.266883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.010 [2024-07-22 17:56:00.282359] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.010 [2024-07-22 17:56:00.282377] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.272 [2024-07-22 17:56:00.297201] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.272 [2024-07-22 17:56:00.297217] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.272 [2024-07-22 17:56:00.312887] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.272 [2024-07-22 17:56:00.312904] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.272 [2024-07-22 17:56:00.327180] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.272 [2024-07-22 17:56:00.327197] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.272 [2024-07-22 17:56:00.342845] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.272 [2024-07-22 17:56:00.342862] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.272 [2024-07-22 17:56:00.358783] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.272 [2024-07-22 17:56:00.358800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.272 [2024-07-22 17:56:00.374467] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.272 [2024-07-22 17:56:00.374485] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.272 [2024-07-22 17:56:00.390296] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.272 [2024-07-22 17:56:00.390313] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.272 [2024-07-22 17:56:00.404838] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.272 [2024-07-22 17:56:00.404855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.272 [2024-07-22 17:56:00.416094] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.272 [2024-07-22 17:56:00.416111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.272 [2024-07-22 17:56:00.431359] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.272 [2024-07-22 17:56:00.431376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.272 [2024-07-22 17:56:00.447468] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.272 [2024-07-22 17:56:00.447486] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.272 [2024-07-22 17:56:00.459297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.272 [2024-07-22 17:56:00.459314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.273 [2024-07-22 17:56:00.474217] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.273 [2024-07-22 17:56:00.474234] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.273 [2024-07-22 17:56:00.490697] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.273 [2024-07-22 17:56:00.490715] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.273 [2024-07-22 17:56:00.507108] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.273 [2024-07-22 17:56:00.507126] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.273 [2024-07-22 17:56:00.518139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.273 [2024-07-22 17:56:00.518156] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.273 [2024-07-22 17:56:00.533932] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.273 [2024-07-22 17:56:00.533953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.534 [2024-07-22 17:56:00.549358] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.534 [2024-07-22 17:56:00.549376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.534 [2024-07-22 17:56:00.563568] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.534 [2024-07-22 17:56:00.563586] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.534 [2024-07-22 17:56:00.578904] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.534 [2024-07-22 17:56:00.578921] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.534 [2024-07-22 17:56:00.594665] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.534 [2024-07-22 17:56:00.594683] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.534 [2024-07-22 17:56:00.609166] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.534 [2024-07-22 17:56:00.609184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.534 [2024-07-22 17:56:00.620762] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.534 [2024-07-22 17:56:00.620779] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.534 [2024-07-22 17:56:00.636520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.534 [2024-07-22 17:56:00.636536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.534 [2024-07-22 17:56:00.653520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.534 [2024-07-22 17:56:00.653537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.534 [2024-07-22 17:56:00.669000] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.535 [2024-07-22 17:56:00.669017] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.535 [2024-07-22 17:56:00.683954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.535 [2024-07-22 17:56:00.683971] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.535 [2024-07-22 17:56:00.700696] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.535 [2024-07-22 17:56:00.700713] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.535 [2024-07-22 17:56:00.716490] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.535 [2024-07-22 17:56:00.716507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.535 [2024-07-22 17:56:00.731828] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.535 [2024-07-22 17:56:00.731846] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.535 [2024-07-22 17:56:00.747151] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.535 [2024-07-22 17:56:00.747168] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.535 [2024-07-22 17:56:00.761954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.535 [2024-07-22 17:56:00.761971] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.535 [2024-07-22 17:56:00.773213] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.535 [2024-07-22 17:56:00.773230] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.535 [2024-07-22 17:56:00.789235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.535 [2024-07-22 17:56:00.789252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.535 [2024-07-22 17:56:00.804636] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.535 [2024-07-22 17:56:00.804654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.796 [2024-07-22 17:56:00.820265] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.796 [2024-07-22 17:56:00.820287] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.796 [2024-07-22 17:56:00.835913] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.796 [2024-07-22 17:56:00.835930] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.796 [2024-07-22 17:56:00.851327] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.796 [2024-07-22 17:56:00.851345] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.796 [2024-07-22 17:56:00.867125] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.796 [2024-07-22 17:56:00.867142] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.796 [2024-07-22 17:56:00.878789] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.796 [2024-07-22 17:56:00.878809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.796 [2024-07-22 17:56:00.894479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.796 [2024-07-22 17:56:00.894496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.796 [2024-07-22 17:56:00.910760] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.796 [2024-07-22 17:56:00.910777] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.796 [2024-07-22 17:56:00.922355] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.796 [2024-07-22 17:56:00.922372] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.796 [2024-07-22 17:56:00.937970] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.796 [2024-07-22 17:56:00.937986] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.796 [2024-07-22 17:56:00.953548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.796 [2024-07-22 17:56:00.953565] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.796 [2024-07-22 17:56:00.969043] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.796 [2024-07-22 17:56:00.969060] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.797 [2024-07-22 17:56:00.984959] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.797 [2024-07-22 17:56:00.984976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.797 [2024-07-22 17:56:00.996924] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.797 [2024-07-22 17:56:00.996941] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.797 [2024-07-22 17:56:01.012025] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.797 [2024-07-22 17:56:01.012042] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.797 [2024-07-22 17:56:01.025568] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.797 [2024-07-22 17:56:01.025586] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.797 [2024-07-22 17:56:01.041416] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.797 [2024-07-22 17:56:01.041433] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.797 [2024-07-22 17:56:01.056745] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.797 [2024-07-22 17:56:01.056763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.797 [2024-07-22 17:56:01.071662] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.797 [2024-07-22 17:56:01.071680] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.137 [2024-07-22 17:56:01.082961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.137 [2024-07-22 17:56:01.082978] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.137 [2024-07-22 17:56:01.098872] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.137 [2024-07-22 17:56:01.098897] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.137 [2024-07-22 17:56:01.114605] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.137 [2024-07-22 17:56:01.114623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.137 [2024-07-22 17:56:01.129094] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.137 [2024-07-22 17:56:01.129113] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.137 [2024-07-22 17:56:01.140739] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.137 [2024-07-22 17:56:01.140756] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.137 [2024-07-22 17:56:01.156192] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.137 [2024-07-22 17:56:01.156210] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.137 [2024-07-22 17:56:01.172827] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.137 [2024-07-22 17:56:01.172844] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.138 [2024-07-22 17:56:01.188796] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.138 [2024-07-22 17:56:01.188813] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.138 [2024-07-22 17:56:01.200000] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.138 [2024-07-22 17:56:01.200017] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.138 [2024-07-22 17:56:01.215785] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.138 [2024-07-22 17:56:01.215802] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.138 [2024-07-22 17:56:01.231517] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.138 [2024-07-22 17:56:01.231534] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.138 [2024-07-22 17:56:01.245797] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.138 [2024-07-22 17:56:01.245814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.138 [2024-07-22 17:56:01.257231] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.138 [2024-07-22 17:56:01.257247] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.138 [2024-07-22 17:56:01.272994] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.138 [2024-07-22 17:56:01.273012] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.138 [2024-07-22 17:56:01.288694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.138 [2024-07-22 17:56:01.288712] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.138 [2024-07-22 17:56:01.299778] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.138 [2024-07-22 17:56:01.299796] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.138 [2024-07-22 17:56:01.315244] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.138 [2024-07-22 17:56:01.315261] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.138 [2024-07-22 17:56:01.330837] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.138 [2024-07-22 17:56:01.330854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.138 [2024-07-22 17:56:01.345458] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.138 [2024-07-22 17:56:01.345475] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.138 [2024-07-22 17:56:01.357295] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.138 [2024-07-22 17:56:01.357313] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.138 [2024-07-22 17:56:01.372901] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.138 [2024-07-22 17:56:01.372918] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.138 [2024-07-22 17:56:01.388367] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.138 [2024-07-22 17:56:01.388385] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.138 [2024-07-22 17:56:01.403339] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.138 [2024-07-22 17:56:01.403363] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.421 [2024-07-22 17:56:01.420031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.421 [2024-07-22 17:56:01.420048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.421 [2024-07-22 17:56:01.435884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.421 [2024-07-22 17:56:01.435902] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.421 [2024-07-22 17:56:01.450725] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.421 [2024-07-22 17:56:01.450743] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.421 [2024-07-22 17:56:01.462062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.421 [2024-07-22 17:56:01.462079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.421 [2024-07-22 17:56:01.477798] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.421 [2024-07-22 17:56:01.477815] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.421 [2024-07-22 17:56:01.493861] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.421 [2024-07-22 17:56:01.493878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.421 [2024-07-22 17:56:01.509107] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.421 [2024-07-22 17:56:01.509124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.421 [2024-07-22 17:56:01.525530] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.421 [2024-07-22 17:56:01.525548] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.421 [2024-07-22 17:56:01.536700] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.421 [2024-07-22 17:56:01.536717] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.421 [2024-07-22 17:56:01.552547] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.421 [2024-07-22 17:56:01.552565] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.421 [2024-07-22 17:56:01.568593] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.421 [2024-07-22 17:56:01.568611] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.421 [2024-07-22 17:56:01.580029] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.421 [2024-07-22 17:56:01.580047] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.421 [2024-07-22 17:56:01.595668] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.421 [2024-07-22 17:56:01.595685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.421 [2024-07-22 17:56:01.611602] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.421 [2024-07-22 17:56:01.611620] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.421 [2024-07-22 17:56:01.626714] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.421 [2024-07-22 17:56:01.626731] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.421 [2024-07-22 17:56:01.642961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.421 [2024-07-22 17:56:01.642979] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.421 [2024-07-22 17:56:01.658504] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.421 [2024-07-22 17:56:01.658522] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.421 [2024-07-22 17:56:01.673306] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.421 [2024-07-22 17:56:01.673324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.421 [2024-07-22 17:56:01.685040] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.421 [2024-07-22 17:56:01.685058] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.682 [2024-07-22 17:56:01.700766] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.682 [2024-07-22 17:56:01.700783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.682 [2024-07-22 17:56:01.716591] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.682 [2024-07-22 17:56:01.716608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.683 [2024-07-22 17:56:01.731562] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.683 [2024-07-22 17:56:01.731580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.683 [2024-07-22 17:56:01.748090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.683 [2024-07-22 17:56:01.748108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.683 [2024-07-22 17:56:01.759322] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.683 [2024-07-22 17:56:01.759339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.683 [2024-07-22 17:56:01.775533] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.683 [2024-07-22 17:56:01.775550] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.683 [2024-07-22 17:56:01.791093] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.683 [2024-07-22 17:56:01.791111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.683 [2024-07-22 17:56:01.806268] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.683 [2024-07-22 17:56:01.806286] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.683 [2024-07-22 17:56:01.822247] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.683 [2024-07-22 17:56:01.822264] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.683 [2024-07-22 17:56:01.837338] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.683 [2024-07-22 17:56:01.837360] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.683 [2024-07-22 17:56:01.852421] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.683 [2024-07-22 17:56:01.852440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.683 [2024-07-22 17:56:01.867853] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.683 [2024-07-22 17:56:01.867870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.683 [2024-07-22 17:56:01.884167] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.683 [2024-07-22 17:56:01.884184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.683 [2024-07-22 17:56:01.895400] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.683 [2024-07-22 17:56:01.895417] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.683 [2024-07-22 17:56:01.910845] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.683 [2024-07-22 17:56:01.910861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.683 [2024-07-22 17:56:01.927041] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.683 [2024-07-22 17:56:01.927059] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.683 [2024-07-22 17:56:01.938439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.683 [2024-07-22 17:56:01.938455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.683 [2024-07-22 17:56:01.954076] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.683 [2024-07-22 17:56:01.954093] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.944 [2024-07-22 17:56:01.969832] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.944 [2024-07-22 17:56:01.969850] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.944 [2024-07-22 17:56:01.984223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.944 [2024-07-22 17:56:01.984241] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.944 [2024-07-22 17:56:01.996139] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.944 [2024-07-22 17:56:01.996156] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.944 [2024-07-22 17:56:02.011494] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.944 [2024-07-22 17:56:02.011511] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.944 [2024-07-22 17:56:02.028074] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.944 [2024-07-22 17:56:02.028091] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.944 [2024-07-22 17:56:02.039339] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.944 [2024-07-22 17:56:02.039361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.944 [2024-07-22 17:56:02.055017] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.944 [2024-07-22 17:56:02.055034] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.944 [2024-07-22 17:56:02.070705] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.944 [2024-07-22 17:56:02.070723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.944 [2024-07-22 17:56:02.085451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.944 [2024-07-22 17:56:02.085467] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.944 [2024-07-22 17:56:02.101399] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.944 [2024-07-22 17:56:02.101417] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.944 [2024-07-22 17:56:02.115901] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.944 [2024-07-22 17:56:02.115918] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.944 [2024-07-22 17:56:02.126900] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.944 [2024-07-22 17:56:02.126917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.944 [2024-07-22 17:56:02.142862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.944 [2024-07-22 17:56:02.142879] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.944 [2024-07-22 17:56:02.158981] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.944 [2024-07-22 17:56:02.158998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.944 [2024-07-22 17:56:02.170431] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.944 [2024-07-22 17:56:02.170449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.944 [2024-07-22 17:56:02.186060] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.944 [2024-07-22 17:56:02.186077] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.944 [2024-07-22 17:56:02.202029] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.944 [2024-07-22 17:56:02.202046] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.944 [2024-07-22 17:56:02.212972] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.944 [2024-07-22 17:56:02.212989] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.205 [2024-07-22 17:56:02.228593] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.205 [2024-07-22 17:56:02.228611] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.205 [2024-07-22 17:56:02.244498] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.205 [2024-07-22 17:56:02.244516] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.205 [2024-07-22 17:56:02.259466] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.205 [2024-07-22 17:56:02.259482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.205 [2024-07-22 17:56:02.275305] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.205 [2024-07-22 17:56:02.275322] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.205 [2024-07-22 17:56:02.290948] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.205 [2024-07-22 17:56:02.290964] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.205 [2024-07-22 17:56:02.306660] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.205 [2024-07-22 17:56:02.306678] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.205 [2024-07-22 17:56:02.320836] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.205 [2024-07-22 17:56:02.320853] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.205 [2024-07-22 17:56:02.336453] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.205 [2024-07-22 17:56:02.336470] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.205 [2024-07-22 17:56:02.351909] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.205 [2024-07-22 17:56:02.351926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.205 [2024-07-22 17:56:02.366487] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.205 [2024-07-22 17:56:02.366504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.205 [2024-07-22 17:56:02.382113] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.205 [2024-07-22 17:56:02.382130] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.205 [2024-07-22 17:56:02.396642] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.205 [2024-07-22 17:56:02.396659] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.205 [2024-07-22 17:56:02.407872] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.205 [2024-07-22 17:56:02.407889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.205 [2024-07-22 17:56:02.423412] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.205 [2024-07-22 17:56:02.423428] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.205 [2024-07-22 17:56:02.440010] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.205 [2024-07-22 17:56:02.440027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.205 [2024-07-22 17:56:02.451084] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.205 [2024-07-22 17:56:02.451101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.205 [2024-07-22 17:56:02.466696] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.205 [2024-07-22 17:56:02.466713] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.467 [2024-07-22 17:56:02.482371] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.467 [2024-07-22 17:56:02.482392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.467 [2024-07-22 17:56:02.497829] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.467 [2024-07-22 17:56:02.497846] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.467 [2024-07-22 17:56:02.514017] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.467 [2024-07-22 17:56:02.514035] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.467 [2024-07-22 17:56:02.530490] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.467 [2024-07-22 17:56:02.530508] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.467 [2024-07-22 17:56:02.546591] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.467 [2024-07-22 17:56:02.546608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.467 [2024-07-22 17:56:02.557768] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.467 [2024-07-22 17:56:02.557785] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.467 [2024-07-22 17:56:02.573427] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.467 [2024-07-22 17:56:02.573443] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.467 [2024-07-22 17:56:02.589363] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.467 [2024-07-22 17:56:02.589380] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.467 [2024-07-22 17:56:02.603516] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.467 [2024-07-22 17:56:02.603533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.467 [2024-07-22 17:56:02.618796] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.467 [2024-07-22 17:56:02.618813] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.467 [2024-07-22 17:56:02.634614] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.467 [2024-07-22 17:56:02.634632] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.467 [2024-07-22 17:56:02.650016] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.467 [2024-07-22 17:56:02.650034] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.467 [2024-07-22 17:56:02.661660] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.467 [2024-07-22 17:56:02.661677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.467 [2024-07-22 17:56:02.677295] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.467 [2024-07-22 17:56:02.677312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.467 [2024-07-22 17:56:02.693059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.467 [2024-07-22 17:56:02.693076] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.467 [2024-07-22 17:56:02.707157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.467 [2024-07-22 17:56:02.707174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.467 [2024-07-22 17:56:02.722517] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.467 [2024-07-22 17:56:02.722533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.467 [2024-07-22 17:56:02.738576] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.467 [2024-07-22 17:56:02.738593] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.728 [2024-07-22 17:56:02.754113] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.728 [2024-07-22 17:56:02.754130] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.728 [2024-07-22 17:56:02.770062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.728 [2024-07-22 17:56:02.770082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.728 [2024-07-22 17:56:02.786069] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.728 [2024-07-22 17:56:02.786086] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.728 [2024-07-22 17:56:02.802767] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.728 [2024-07-22 17:56:02.802784] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.728 [2024-07-22 17:56:02.818660] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.728 [2024-07-22 17:56:02.818678] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.728 [2024-07-22 17:56:02.830196] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.728 [2024-07-22 17:56:02.830214] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.728 [2024-07-22 17:56:02.846027] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.728 [2024-07-22 17:56:02.846045] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.728 [2024-07-22 17:56:02.861797] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.728 [2024-07-22 17:56:02.861814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.728 [2024-07-22 17:56:02.876101] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.728 [2024-07-22 17:56:02.876118] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.728 [2024-07-22 17:56:02.887230] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.728 [2024-07-22 17:56:02.887247] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.728 [2024-07-22 17:56:02.903029] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.728 [2024-07-22 17:56:02.903047] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.728 [2024-07-22 17:56:02.918251] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.728 [2024-07-22 17:56:02.918268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.728 [2024-07-22 17:56:02.933203] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.728 [2024-07-22 17:56:02.933221] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.728 [2024-07-22 17:56:02.944799] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.728 [2024-07-22 17:56:02.944816] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.728 [2024-07-22 17:56:02.960757] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.728 [2024-07-22 17:56:02.960774] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.728 [2024-07-22 17:56:02.976326] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.728 [2024-07-22 17:56:02.976344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.728 [2024-07-22 17:56:02.990517] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.728 [2024-07-22 17:56:02.990535] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.989 [2024-07-22 17:56:03.006036] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.989 [2024-07-22 17:56:03.006052] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.989 [2024-07-22 17:56:03.021495] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.989 [2024-07-22 17:56:03.021513] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.989 [2024-07-22 17:56:03.036285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.989 [2024-07-22 17:56:03.036302] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.989 [2024-07-22 17:56:03.053096] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.989 [2024-07-22 17:56:03.053117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.989 [2024-07-22 17:56:03.068739] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.989 [2024-07-22 17:56:03.068756] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.989 [2024-07-22 17:56:03.084683] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.989 [2024-07-22 17:56:03.084700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.989 [2024-07-22 17:56:03.100176] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.989 [2024-07-22 17:56:03.100193] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.989 [2024-07-22 17:56:03.111607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.989 [2024-07-22 17:56:03.111624] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.989 [2024-07-22 17:56:03.126945] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.990 [2024-07-22 17:56:03.126962] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.990 [2024-07-22 17:56:03.142441] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.990 [2024-07-22 17:56:03.142458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.990 [2024-07-22 17:56:03.157119] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.990 [2024-07-22 17:56:03.157137] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.990 [2024-07-22 17:56:03.173558] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.990 [2024-07-22 17:56:03.173575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.990 [2024-07-22 17:56:03.188654] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.990 [2024-07-22 17:56:03.188672] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.990 [2024-07-22 17:56:03.204299] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.990 [2024-07-22 17:56:03.204317] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.990 [2024-07-22 17:56:03.219981] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.990 [2024-07-22 17:56:03.219998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.990 [2024-07-22 17:56:03.235204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.990 [2024-07-22 17:56:03.235222] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.990 [2024-07-22 17:56:03.250302] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.990 [2024-07-22 17:56:03.250320] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.250 [2024-07-22 17:56:03.266738] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.250 [2024-07-22 17:56:03.266755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.250 [2024-07-22 17:56:03.282910] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.250 [2024-07-22 17:56:03.282928] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.250 [2024-07-22 17:56:03.294189] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.250 [2024-07-22 17:56:03.294206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.250 [2024-07-22 17:56:03.310373] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.250 [2024-07-22 17:56:03.310391] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.250 [2024-07-22 17:56:03.326492] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.250 [2024-07-22 17:56:03.326510] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.250 [2024-07-22 17:56:03.337577] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.250 [2024-07-22 17:56:03.337599] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.250 [2024-07-22 17:56:03.353377] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.250 [2024-07-22 17:56:03.353394] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.250 [2024-07-22 17:56:03.369477] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.250 [2024-07-22 17:56:03.369495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.250 [2024-07-22 17:56:03.385217] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.250 [2024-07-22 17:56:03.385233] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.250 [2024-07-22 17:56:03.401730] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.250 [2024-07-22 17:56:03.401747] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.250 [2024-07-22 17:56:03.417805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.250 [2024-07-22 17:56:03.417824] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.250 [2024-07-22 17:56:03.433321] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.250 [2024-07-22 17:56:03.433338] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.250 [2024-07-22 17:56:03.449994] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.250 [2024-07-22 17:56:03.450012] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.250 [2024-07-22 17:56:03.461459] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.250 [2024-07-22 17:56:03.461477] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.250 [2024-07-22 17:56:03.476939] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.250 [2024-07-22 17:56:03.476956] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.250 [2024-07-22 17:56:03.493111] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.250 [2024-07-22 17:56:03.493128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.250 [2024-07-22 17:56:03.505181] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.250 [2024-07-22 17:56:03.505198] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.250 [2024-07-22 17:56:03.520847] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.250 [2024-07-22 17:56:03.520864] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.511 [2024-07-22 17:56:03.537501] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.511 [2024-07-22 17:56:03.537519] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.511 [2024-07-22 17:56:03.554167] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.511 [2024-07-22 17:56:03.554184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.511 [2024-07-22 17:56:03.570001] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.511 [2024-07-22 17:56:03.570018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.511 [2024-07-22 17:56:03.585577] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.511 [2024-07-22 17:56:03.585595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.511 [2024-07-22 17:56:03.601987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.511 [2024-07-22 17:56:03.602004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.511 [2024-07-22 17:56:03.618302] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.511 [2024-07-22 17:56:03.618319] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.511 [2024-07-22 17:56:03.634826] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.511 [2024-07-22 17:56:03.634843] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.511 [2024-07-22 17:56:03.651029] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.511 [2024-07-22 17:56:03.651046] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.511 [2024-07-22 17:56:03.667745] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.511 [2024-07-22 17:56:03.667763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.511 [2024-07-22 17:56:03.683971] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.511 [2024-07-22 17:56:03.683988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.511 [2024-07-22 17:56:03.700546] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.511 [2024-07-22 17:56:03.700563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.511 [2024-07-22 17:56:03.716509] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.511 [2024-07-22 17:56:03.716527] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.511 [2024-07-22 17:56:03.727732] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.511 [2024-07-22 17:56:03.727750] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.511 [2024-07-22 17:56:03.743211] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.511 [2024-07-22 17:56:03.743227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.511 [2024-07-22 17:56:03.759064] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.511 [2024-07-22 17:56:03.759083] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.511 [2024-07-22 17:56:03.773862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.511 [2024-07-22 17:56:03.773879] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.771 [2024-07-22 17:56:03.790220] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.771 [2024-07-22 17:56:03.790238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.771 [2024-07-22 17:56:03.805783] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.771 [2024-07-22 17:56:03.805801] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.771 [2024-07-22 17:56:03.822775] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.771 [2024-07-22 17:56:03.822792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.771 [2024-07-22 17:56:03.838307] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.771 [2024-07-22 17:56:03.838324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.771 [2024-07-22 17:56:03.852807] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.771 [2024-07-22 17:56:03.852825] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.771 [2024-07-22 17:56:03.864094] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.771 [2024-07-22 17:56:03.864111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.771 [2024-07-22 17:56:03.879795] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.771 [2024-07-22 17:56:03.879813] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.771 [2024-07-22 17:56:03.896408] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.771 [2024-07-22 17:56:03.896426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.771 [2024-07-22 17:56:03.913042] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.771 [2024-07-22 17:56:03.913060] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.771 [2024-07-22 17:56:03.929574] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.771 [2024-07-22 17:56:03.929591] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.771 [2024-07-22 17:56:03.946081] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.771 [2024-07-22 17:56:03.946098] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.771 [2024-07-22 17:56:03.962574] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.771 [2024-07-22 17:56:03.962591] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.771 [2024-07-22 17:56:03.974897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.771 [2024-07-22 17:56:03.974915] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.771 [2024-07-22 17:56:03.990288] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.771 [2024-07-22 17:56:03.990305] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.771 [2024-07-22 17:56:04.006112] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.771 [2024-07-22 17:56:04.006129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.771 [2024-07-22 17:56:04.021951] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.771 [2024-07-22 17:56:04.021968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.771 [2024-07-22 17:56:04.037821] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.771 [2024-07-22 17:56:04.037838] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.032 [2024-07-22 17:56:04.053040] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.032 [2024-07-22 17:56:04.053057] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.032 [2024-07-22 17:56:04.069161] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.032 [2024-07-22 17:56:04.069178] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.032 [2024-07-22 17:56:04.084569] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.032 [2024-07-22 17:56:04.084586] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.032 [2024-07-22 17:56:04.100454] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.032 [2024-07-22 17:56:04.100471] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.032 [2024-07-22 17:56:04.116166] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.032 [2024-07-22 17:56:04.116183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.032 [2024-07-22 17:56:04.131411] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.032 [2024-07-22 17:56:04.131429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.032 [2024-07-22 17:56:04.146297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.032 [2024-07-22 17:56:04.146314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.032 [2024-07-22 17:56:04.163107] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.032 [2024-07-22 17:56:04.163124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.032 [2024-07-22 17:56:04.178968] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.032 [2024-07-22 17:56:04.178985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.032 [2024-07-22 17:56:04.192688] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.032 [2024-07-22 17:56:04.192706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.032 [2024-07-22 17:56:04.208273] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.032 [2024-07-22 17:56:04.208289] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.032 [2024-07-22 17:56:04.224185] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.032 [2024-07-22 17:56:04.224203] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.032 [2024-07-22 17:56:04.235588] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.032 [2024-07-22 17:56:04.235605] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.032 [2024-07-22 17:56:04.251453] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.032 [2024-07-22 17:56:04.251470] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.032 [2024-07-22 17:56:04.267006] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.032 [2024-07-22 17:56:04.267023] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.032 [2024-07-22 17:56:04.281379] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.032 [2024-07-22 17:56:04.281395] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.032 [2024-07-22 17:56:04.292763] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.032 [2024-07-22 17:56:04.292780] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.293 [2024-07-22 17:56:04.308274] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.293 [2024-07-22 17:56:04.308290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.293 [2024-07-22 17:56:04.323299] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.293 [2024-07-22 17:56:04.323315] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.293 [2024-07-22 17:56:04.338908] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.293 [2024-07-22 17:56:04.338925] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.293 [2024-07-22 17:56:04.355242] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.293 [2024-07-22 17:56:04.355259] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.293 [2024-07-22 17:56:04.366508] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.293 [2024-07-22 17:56:04.366526] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.293 [2024-07-22 17:56:04.382613] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.293 [2024-07-22 17:56:04.382630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.293 [2024-07-22 17:56:04.398016] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.293 [2024-07-22 17:56:04.398033] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.293 [2024-07-22 17:56:04.413132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.293 [2024-07-22 17:56:04.413150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same "subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use" / "nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace" pair repeats for every add-namespace attempt between 17:56:04.425 and 17:56:05.029; the near-identical repeats are trimmed here ...]
00:19:00.816 [2024-07-22 17:56:05.041296]
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.816 [2024-07-22 17:56:05.041317] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.816 [2024-07-22 17:56:05.056734] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.816 [2024-07-22 17:56:05.056751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.816 [2024-07-22 17:56:05.067848] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.816 [2024-07-22 17:56:05.067866] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.816 [2024-07-22 17:56:05.082655] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.816 [2024-07-22 17:56:05.082673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.076 [2024-07-22 17:56:05.094595] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.076 [2024-07-22 17:56:05.094612] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.076 [2024-07-22 17:56:05.106306] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.076 [2024-07-22 17:56:05.106322] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.076 00:19:01.076 Latency(us) 00:19:01.076 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.076 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:19:01.076 Nvme1n1 : 5.01 15229.49 118.98 0.00 0.00 8395.69 3831.34 16938.54 00:19:01.076 =================================================================================================================== 00:19:01.076 Total : 15229.49 118.98 0.00 0.00 8395.69 3831.34 16938.54 00:19:01.076 [2024-07-22 17:56:05.117288] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.076 [2024-07-22 17:56:05.117303] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.076 [2024-07-22 17:56:05.129331] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.076 [2024-07-22 17:56:05.129352] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.076 [2024-07-22 17:56:05.141354] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.076 [2024-07-22 17:56:05.141369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.076 [2024-07-22 17:56:05.153385] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.076 [2024-07-22 17:56:05.153400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.076 [2024-07-22 17:56:05.165413] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.076 [2024-07-22 17:56:05.165425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.076 [2024-07-22 17:56:05.177444] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.076 [2024-07-22 17:56:05.177455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.076 [2024-07-22 17:56:05.189475] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.076 [2024-07-22 17:56:05.189486] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.076 [2024-07-22 17:56:05.201512] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.076 [2024-07-22 17:56:05.201524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.076 [2024-07-22 17:56:05.213542] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.076 [2024-07-22 17:56:05.213554] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.076 [2024-07-22 17:56:05.225575] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.076 [2024-07-22 17:56:05.225588] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.076 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1678312) - No such process 00:19:01.076 17:56:05 -- target/zcopy.sh@49 -- # wait 1678312 00:19:01.076 17:56:05 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:01.076 17:56:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.076 17:56:05 -- common/autotest_common.sh@10 -- # set +x 00:19:01.076 17:56:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.076 17:56:05 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:01.076 17:56:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.076 17:56:05 -- common/autotest_common.sh@10 -- # set +x 00:19:01.076 delay0 00:19:01.076 17:56:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.076 17:56:05 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:19:01.076 17:56:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:01.076 17:56:05 -- common/autotest_common.sh@10 -- # set +x 00:19:01.076 17:56:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:01.076 17:56:05 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:19:01.076 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.336 [2024-07-22 17:56:05.384663] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:19:07.945 Initializing NVMe Controllers 00:19:07.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:07.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:07.945 Initialization complete. Launching workers. 
00:19:07.945 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 114 00:19:07.945 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 396, failed to submit 38 00:19:07.945 success 212, unsuccess 184, failed 0 00:19:07.945 17:56:11 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:19:07.945 17:56:11 -- target/zcopy.sh@60 -- # nvmftestfini 00:19:07.945 17:56:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:07.945 17:56:11 -- nvmf/common.sh@116 -- # sync 00:19:07.945 17:56:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:07.945 17:56:11 -- nvmf/common.sh@119 -- # set +e 00:19:07.945 17:56:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:07.945 17:56:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:07.945 rmmod nvme_tcp 00:19:07.945 rmmod nvme_fabrics 00:19:07.945 rmmod nvme_keyring 00:19:07.945 17:56:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:07.945 17:56:11 -- nvmf/common.sh@123 -- # set -e 00:19:07.945 17:56:11 -- nvmf/common.sh@124 -- # return 0 00:19:07.945 17:56:11 -- nvmf/common.sh@477 -- # '[' -n 1676414 ']' 00:19:07.945 17:56:11 -- nvmf/common.sh@478 -- # killprocess 1676414 00:19:07.945 17:56:11 -- common/autotest_common.sh@926 -- # '[' -z 1676414 ']' 00:19:07.945 17:56:11 -- common/autotest_common.sh@930 -- # kill -0 1676414 00:19:07.945 17:56:11 -- common/autotest_common.sh@931 -- # uname 00:19:07.945 17:56:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:07.945 17:56:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1676414 00:19:07.945 17:56:11 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:07.945 17:56:11 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:07.945 17:56:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1676414' 00:19:07.945 killing process with pid 1676414 00:19:07.945 17:56:11 -- common/autotest_common.sh@945 -- # kill 1676414 00:19:07.945 17:56:11 -- common/autotest_common.sh@950 -- # wait 1676414 00:19:07.946 17:56:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:07.946 17:56:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:07.946 17:56:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:07.946 17:56:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:07.946 17:56:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:07.946 17:56:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.946 17:56:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.946 17:56:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.857 17:56:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:09.857 00:19:09.857 real 0m34.078s 00:19:09.857 user 0m44.066s 00:19:09.857 sys 0m10.971s 00:19:09.857 17:56:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:09.857 17:56:13 -- common/autotest_common.sh@10 -- # set +x 00:19:09.857 ************************************ 00:19:09.857 END TEST nvmf_zcopy 00:19:09.857 ************************************ 00:19:09.857 17:56:13 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:09.857 17:56:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:09.857 17:56:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:09.857 17:56:13 -- common/autotest_common.sh@10 -- # set +x 00:19:09.857 
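The nvmf_zcopy run above boils down to a short sequence of RPCs followed by the abort example. The sketch below is reconstructed from the trace, not a verbatim excerpt: it assumes an SPDK nvmf target is already serving subsystem nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 with a malloc bdev named malloc0 attached, and it calls scripts/rpc.py directly where the test uses its rpc_cmd wrapper (paths are illustrative, relative to the SPDK repo root).

# Free NSID 1 on the subsystem (the repeated "Requested NSID 1 already in use" errors
# above come from the test repeatedly trying to reuse this NSID while the namespace is paused).
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1

# Wrap malloc0 in a delay bdev with large artificial latencies so queued I/O stays
# outstanding long enough to be aborted.
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000

# Expose the delayed bdev as NSID 1 again on the same subsystem.
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

# Submit random reads/writes (50/50 mix) at queue depth 64 for 5 seconds over TCP and
# issue aborts against them; this produces the NS/CTRLR abort summary printed above.
build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'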
************************************ 00:19:09.857 START TEST nvmf_nmic 00:19:09.857 ************************************ 00:19:09.857 17:56:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:09.857 * Looking for test storage... 00:19:09.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:09.857 17:56:13 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:09.857 17:56:13 -- nvmf/common.sh@7 -- # uname -s 00:19:09.857 17:56:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:09.857 17:56:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:09.857 17:56:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:09.857 17:56:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:09.857 17:56:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:09.857 17:56:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:09.857 17:56:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:09.857 17:56:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:09.857 17:56:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:09.857 17:56:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:09.857 17:56:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:09.857 17:56:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:09.857 17:56:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:09.857 17:56:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:09.857 17:56:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:09.857 17:56:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:09.857 17:56:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:09.857 17:56:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:09.857 17:56:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:09.857 17:56:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.857 17:56:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.857 17:56:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.857 17:56:13 -- paths/export.sh@5 -- # export PATH 00:19:09.857 17:56:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.857 17:56:13 -- nvmf/common.sh@46 -- # : 0 00:19:09.857 17:56:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:09.857 17:56:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:09.857 17:56:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:09.857 17:56:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:09.857 17:56:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:09.857 17:56:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:09.857 17:56:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:09.857 17:56:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:09.857 17:56:13 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:09.857 17:56:13 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:09.857 17:56:13 -- target/nmic.sh@14 -- # nvmftestinit 00:19:09.857 17:56:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:09.857 17:56:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:09.857 17:56:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:09.857 17:56:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:09.857 17:56:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:09.857 17:56:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.857 17:56:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:09.857 17:56:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.857 17:56:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:09.857 17:56:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:09.857 17:56:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:09.857 17:56:13 -- common/autotest_common.sh@10 -- # set +x 00:19:17.998 17:56:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:17.998 17:56:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:17.998 17:56:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:17.998 17:56:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:17.998 17:56:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:17.998 17:56:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:17.998 17:56:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:17.998 17:56:21 -- nvmf/common.sh@294 -- # net_devs=() 00:19:17.998 17:56:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:17.998 17:56:21 -- nvmf/common.sh@295 -- # 
e810=() 00:19:17.998 17:56:21 -- nvmf/common.sh@295 -- # local -ga e810 00:19:17.998 17:56:21 -- nvmf/common.sh@296 -- # x722=() 00:19:17.998 17:56:21 -- nvmf/common.sh@296 -- # local -ga x722 00:19:17.998 17:56:21 -- nvmf/common.sh@297 -- # mlx=() 00:19:17.998 17:56:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:17.998 17:56:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:17.998 17:56:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:17.998 17:56:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:17.998 17:56:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:17.998 17:56:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:17.998 17:56:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:17.998 17:56:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:17.998 17:56:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:17.998 17:56:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:17.998 17:56:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:17.998 17:56:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:17.998 17:56:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:17.998 17:56:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:17.998 17:56:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:17.998 17:56:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:17.998 17:56:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:17.998 17:56:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:17.998 17:56:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:17.998 17:56:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:17.998 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:17.998 17:56:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:17.998 17:56:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:17.998 17:56:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:17.998 17:56:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:17.998 17:56:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:17.998 17:56:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:17.998 17:56:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:17.998 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:17.998 17:56:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:17.998 17:56:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:17.998 17:56:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:17.998 17:56:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:17.998 17:56:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:17.998 17:56:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:17.998 17:56:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:17.998 17:56:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:17.998 17:56:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:17.998 17:56:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.998 17:56:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:17.998 17:56:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.998 17:56:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:17.998 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:19:17.998 17:56:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:17.998 17:56:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:17.998 17:56:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.998 17:56:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:17.999 17:56:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.999 17:56:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:17.999 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:17.999 17:56:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:17.999 17:56:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:17.999 17:56:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:17.999 17:56:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:17.999 17:56:21 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:17.999 17:56:21 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:17.999 17:56:21 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:17.999 17:56:21 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:17.999 17:56:21 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:17.999 17:56:21 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:17.999 17:56:21 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:17.999 17:56:21 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:17.999 17:56:21 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:17.999 17:56:21 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:17.999 17:56:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:17.999 17:56:21 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:17.999 17:56:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:17.999 17:56:21 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:17.999 17:56:21 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:17.999 17:56:21 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:17.999 17:56:21 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:17.999 17:56:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:17.999 17:56:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:17.999 17:56:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:17.999 17:56:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:17.999 17:56:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:17.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:17.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:19:17.999 00:19:17.999 --- 10.0.0.2 ping statistics --- 00:19:17.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.999 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:19:17.999 17:56:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:17.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:17.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:19:17.999 00:19:17.999 --- 10.0.0.1 ping statistics --- 00:19:17.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.999 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:19:17.999 17:56:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:17.999 17:56:21 -- nvmf/common.sh@410 -- # return 0 00:19:17.999 17:56:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:17.999 17:56:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:17.999 17:56:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:17.999 17:56:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:17.999 17:56:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:17.999 17:56:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:17.999 17:56:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:17.999 17:56:21 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:17.999 17:56:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:17.999 17:56:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:17.999 17:56:21 -- common/autotest_common.sh@10 -- # set +x 00:19:17.999 17:56:21 -- nvmf/common.sh@469 -- # nvmfpid=1684670 00:19:17.999 17:56:21 -- nvmf/common.sh@470 -- # waitforlisten 1684670 00:19:17.999 17:56:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:17.999 17:56:21 -- common/autotest_common.sh@819 -- # '[' -z 1684670 ']' 00:19:17.999 17:56:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.999 17:56:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:17.999 17:56:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.999 17:56:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:17.999 17:56:21 -- common/autotest_common.sh@10 -- # set +x 00:19:17.999 [2024-07-22 17:56:21.859575] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:17.999 [2024-07-22 17:56:21.859640] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:17.999 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.999 [2024-07-22 17:56:21.954134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:17.999 [2024-07-22 17:56:22.048089] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:17.999 [2024-07-22 17:56:22.048242] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:17.999 [2024-07-22 17:56:22.048252] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:17.999 [2024-07-22 17:56:22.048259] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:17.999 [2024-07-22 17:56:22.048362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.999 [2024-07-22 17:56:22.048494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.999 [2024-07-22 17:56:22.048732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:17.999 [2024-07-22 17:56:22.048738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.571 17:56:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:18.571 17:56:22 -- common/autotest_common.sh@852 -- # return 0 00:19:18.571 17:56:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:18.571 17:56:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:18.571 17:56:22 -- common/autotest_common.sh@10 -- # set +x 00:19:18.571 17:56:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.571 17:56:22 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:18.571 17:56:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:18.571 17:56:22 -- common/autotest_common.sh@10 -- # set +x 00:19:18.571 [2024-07-22 17:56:22.762538] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.571 17:56:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:18.571 17:56:22 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:18.571 17:56:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:18.571 17:56:22 -- common/autotest_common.sh@10 -- # set +x 00:19:18.571 Malloc0 00:19:18.571 17:56:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:18.571 17:56:22 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:18.571 17:56:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:18.571 17:56:22 -- common/autotest_common.sh@10 -- # set +x 00:19:18.571 17:56:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:18.571 17:56:22 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:18.571 17:56:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:18.571 17:56:22 -- common/autotest_common.sh@10 -- # set +x 00:19:18.571 17:56:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:18.571 17:56:22 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:18.571 17:56:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:18.571 17:56:22 -- common/autotest_common.sh@10 -- # set +x 00:19:18.571 [2024-07-22 17:56:22.818725] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:18.571 17:56:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:18.571 17:56:22 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:18.571 test case1: single bdev can't be used in multiple subsystems 00:19:18.571 17:56:22 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:18.571 17:56:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:18.571 17:56:22 -- common/autotest_common.sh@10 -- # set +x 00:19:18.571 17:56:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:18.571 17:56:22 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:18.571 17:56:22 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:19:18.571 17:56:22 -- common/autotest_common.sh@10 -- # set +x 00:19:18.832 17:56:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:18.832 17:56:22 -- target/nmic.sh@28 -- # nmic_status=0 00:19:18.832 17:56:22 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:18.832 17:56:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:18.832 17:56:22 -- common/autotest_common.sh@10 -- # set +x 00:19:18.832 [2024-07-22 17:56:22.854680] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:18.832 [2024-07-22 17:56:22.854697] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:18.832 [2024-07-22 17:56:22.854704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:18.832 request: 00:19:18.832 { 00:19:18.832 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:18.832 "namespace": { 00:19:18.832 "bdev_name": "Malloc0" 00:19:18.832 }, 00:19:18.832 "method": "nvmf_subsystem_add_ns", 00:19:18.832 "req_id": 1 00:19:18.832 } 00:19:18.832 Got JSON-RPC error response 00:19:18.832 response: 00:19:18.832 { 00:19:18.832 "code": -32602, 00:19:18.832 "message": "Invalid parameters" 00:19:18.832 } 00:19:18.832 17:56:22 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:18.832 17:56:22 -- target/nmic.sh@29 -- # nmic_status=1 00:19:18.832 17:56:22 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:18.832 17:56:22 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:18.832 Adding namespace failed - expected result. 00:19:18.832 17:56:22 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:18.832 test case2: host connect to nvmf target in multiple paths 00:19:18.832 17:56:22 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:18.832 17:56:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:18.832 17:56:22 -- common/autotest_common.sh@10 -- # set +x 00:19:18.832 [2024-07-22 17:56:22.866806] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:18.832 17:56:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:18.832 17:56:22 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:20.217 17:56:24 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:19:22.129 17:56:25 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:22.129 17:56:25 -- common/autotest_common.sh@1177 -- # local i=0 00:19:22.129 17:56:25 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:22.129 17:56:25 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:22.129 17:56:25 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:24.040 17:56:27 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:24.040 17:56:27 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:24.040 17:56:27 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:24.040 17:56:27 -- common/autotest_common.sh@1186 -- # 
nvme_devices=1 00:19:24.040 17:56:27 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:24.040 17:56:27 -- common/autotest_common.sh@1187 -- # return 0 00:19:24.040 17:56:27 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:24.040 [global] 00:19:24.040 thread=1 00:19:24.040 invalidate=1 00:19:24.040 rw=write 00:19:24.040 time_based=1 00:19:24.040 runtime=1 00:19:24.040 ioengine=libaio 00:19:24.040 direct=1 00:19:24.040 bs=4096 00:19:24.040 iodepth=1 00:19:24.040 norandommap=0 00:19:24.040 numjobs=1 00:19:24.040 00:19:24.040 verify_dump=1 00:19:24.040 verify_backlog=512 00:19:24.040 verify_state_save=0 00:19:24.040 do_verify=1 00:19:24.040 verify=crc32c-intel 00:19:24.040 [job0] 00:19:24.040 filename=/dev/nvme0n1 00:19:24.040 Could not set queue depth (nvme0n1) 00:19:24.040 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:24.040 fio-3.35 00:19:24.040 Starting 1 thread 00:19:25.427 00:19:25.427 job0: (groupid=0, jobs=1): err= 0: pid=1686071: Mon Jul 22 17:56:29 2024 00:19:25.427 read: IOPS=18, BW=74.6KiB/s (76.4kB/s)(76.0KiB/1019msec) 00:19:25.427 slat (nsec): min=25445, max=26775, avg=25892.89, stdev=424.31 00:19:25.427 clat (usec): min=872, max=42453, avg=39825.61, stdev=9433.64 00:19:25.427 lat (usec): min=898, max=42478, avg=39851.50, stdev=9433.65 00:19:25.427 clat percentiles (usec): 00:19:25.427 | 1.00th=[ 873], 5.00th=[ 873], 10.00th=[41681], 20.00th=[41681], 00:19:25.427 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:19:25.427 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:25.427 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:25.427 | 99.99th=[42206] 00:19:25.427 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:19:25.427 slat (nsec): min=8913, max=62893, avg=29483.32, stdev=9364.11 00:19:25.427 clat (usec): min=180, max=719, avg=474.05, stdev=90.10 00:19:25.427 lat (usec): min=190, max=772, avg=503.53, stdev=94.09 00:19:25.427 clat percentiles (usec): 00:19:25.427 | 1.00th=[ 249], 5.00th=[ 314], 10.00th=[ 359], 20.00th=[ 396], 00:19:25.427 | 30.00th=[ 424], 40.00th=[ 465], 50.00th=[ 478], 60.00th=[ 494], 00:19:25.427 | 70.00th=[ 519], 80.00th=[ 553], 90.00th=[ 586], 95.00th=[ 627], 00:19:25.427 | 99.00th=[ 668], 99.50th=[ 676], 99.90th=[ 717], 99.95th=[ 717], 00:19:25.427 | 99.99th=[ 717] 00:19:25.427 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:19:25.427 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:25.427 lat (usec) : 250=1.13%, 500=60.08%, 750=35.22%, 1000=0.19% 00:19:25.427 lat (msec) : 50=3.39% 00:19:25.427 cpu : usr=1.77%, sys=1.18%, ctx=531, majf=0, minf=1 00:19:25.427 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:25.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.427 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.427 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:25.427 00:19:25.427 Run status group 0 (all jobs): 00:19:25.427 READ: bw=74.6KiB/s (76.4kB/s), 74.6KiB/s-74.6KiB/s (76.4kB/s-76.4kB/s), io=76.0KiB (77.8kB), run=1019-1019msec 00:19:25.427 WRITE: bw=2010KiB/s (2058kB/s), 2010KiB/s-2010KiB/s (2058kB/s-2058kB/s), 
io=2048KiB (2097kB), run=1019-1019msec 00:19:25.427 00:19:25.427 Disk stats (read/write): 00:19:25.427 nvme0n1: ios=66/512, merge=0/0, ticks=1005/205, in_queue=1210, util=97.39% 00:19:25.427 17:56:29 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:25.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:25.427 17:56:29 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:25.427 17:56:29 -- common/autotest_common.sh@1198 -- # local i=0 00:19:25.427 17:56:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:25.427 17:56:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:25.427 17:56:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:25.427 17:56:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:25.427 17:56:29 -- common/autotest_common.sh@1210 -- # return 0 00:19:25.427 17:56:29 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:25.427 17:56:29 -- target/nmic.sh@53 -- # nvmftestfini 00:19:25.427 17:56:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:25.427 17:56:29 -- nvmf/common.sh@116 -- # sync 00:19:25.427 17:56:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:25.427 17:56:29 -- nvmf/common.sh@119 -- # set +e 00:19:25.427 17:56:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:25.427 17:56:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:25.427 rmmod nvme_tcp 00:19:25.427 rmmod nvme_fabrics 00:19:25.688 rmmod nvme_keyring 00:19:25.688 17:56:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:25.688 17:56:29 -- nvmf/common.sh@123 -- # set -e 00:19:25.688 17:56:29 -- nvmf/common.sh@124 -- # return 0 00:19:25.688 17:56:29 -- nvmf/common.sh@477 -- # '[' -n 1684670 ']' 00:19:25.688 17:56:29 -- nvmf/common.sh@478 -- # killprocess 1684670 00:19:25.688 17:56:29 -- common/autotest_common.sh@926 -- # '[' -z 1684670 ']' 00:19:25.688 17:56:29 -- common/autotest_common.sh@930 -- # kill -0 1684670 00:19:25.688 17:56:29 -- common/autotest_common.sh@931 -- # uname 00:19:25.688 17:56:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:25.688 17:56:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1684670 00:19:25.688 17:56:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:25.688 17:56:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:25.688 17:56:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1684670' 00:19:25.688 killing process with pid 1684670 00:19:25.688 17:56:29 -- common/autotest_common.sh@945 -- # kill 1684670 00:19:25.688 17:56:29 -- common/autotest_common.sh@950 -- # wait 1684670 00:19:25.688 17:56:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:25.688 17:56:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:25.688 17:56:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:25.688 17:56:29 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:25.688 17:56:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:25.688 17:56:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.688 17:56:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:25.688 17:56:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.232 17:56:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:28.232 00:19:28.232 real 0m18.142s 00:19:28.232 user 0m43.137s 00:19:28.232 sys 0m6.614s 00:19:28.232 17:56:31 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:19:28.232 17:56:31 -- common/autotest_common.sh@10 -- # set +x 00:19:28.232 ************************************ 00:19:28.232 END TEST nvmf_nmic 00:19:28.232 ************************************ 00:19:28.232 17:56:32 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:28.232 17:56:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:28.232 17:56:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:28.232 17:56:32 -- common/autotest_common.sh@10 -- # set +x 00:19:28.232 ************************************ 00:19:28.232 START TEST nvmf_fio_target 00:19:28.232 ************************************ 00:19:28.232 17:56:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:28.232 * Looking for test storage... 00:19:28.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:28.232 17:56:32 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:28.232 17:56:32 -- nvmf/common.sh@7 -- # uname -s 00:19:28.232 17:56:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:28.232 17:56:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:28.232 17:56:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:28.232 17:56:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:28.232 17:56:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:28.232 17:56:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:28.232 17:56:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:28.232 17:56:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:28.232 17:56:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:28.232 17:56:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:28.232 17:56:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:28.232 17:56:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:28.232 17:56:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:28.232 17:56:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:28.232 17:56:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:28.232 17:56:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:28.232 17:56:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:28.232 17:56:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:28.232 17:56:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:28.232 17:56:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.232 17:56:32 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.232 17:56:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.232 17:56:32 -- paths/export.sh@5 -- # export PATH 00:19:28.232 17:56:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.232 17:56:32 -- nvmf/common.sh@46 -- # : 0 00:19:28.232 17:56:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:28.232 17:56:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:28.232 17:56:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:28.232 17:56:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:28.232 17:56:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:28.232 17:56:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:28.232 17:56:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:28.232 17:56:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:28.232 17:56:32 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:28.232 17:56:32 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:28.232 17:56:32 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:28.232 17:56:32 -- target/fio.sh@16 -- # nvmftestinit 00:19:28.232 17:56:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:28.232 17:56:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:28.232 17:56:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:28.232 17:56:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:28.232 17:56:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:28.232 17:56:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.233 17:56:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:28.233 17:56:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.233 17:56:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:28.233 17:56:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:28.233 17:56:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:28.233 17:56:32 -- 
common/autotest_common.sh@10 -- # set +x 00:19:36.457 17:56:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:36.457 17:56:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:36.457 17:56:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:36.457 17:56:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:36.457 17:56:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:36.457 17:56:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:36.457 17:56:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:36.457 17:56:39 -- nvmf/common.sh@294 -- # net_devs=() 00:19:36.457 17:56:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:36.457 17:56:39 -- nvmf/common.sh@295 -- # e810=() 00:19:36.457 17:56:39 -- nvmf/common.sh@295 -- # local -ga e810 00:19:36.457 17:56:39 -- nvmf/common.sh@296 -- # x722=() 00:19:36.457 17:56:39 -- nvmf/common.sh@296 -- # local -ga x722 00:19:36.457 17:56:39 -- nvmf/common.sh@297 -- # mlx=() 00:19:36.457 17:56:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:36.457 17:56:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:36.457 17:56:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:36.457 17:56:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:36.457 17:56:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:36.457 17:56:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:36.457 17:56:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:36.457 17:56:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:36.457 17:56:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:36.457 17:56:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:36.457 17:56:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:36.458 17:56:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:36.458 17:56:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:36.458 17:56:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:36.458 17:56:39 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:36.458 17:56:39 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:36.458 17:56:39 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:36.458 17:56:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:36.458 17:56:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:36.458 17:56:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:36.458 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:36.458 17:56:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:36.458 17:56:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:36.458 17:56:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.458 17:56:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.458 17:56:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:36.458 17:56:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:36.458 17:56:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:36.458 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:36.458 17:56:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:36.458 17:56:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:36.458 17:56:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.458 17:56:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:19:36.458 17:56:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:36.458 17:56:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:36.458 17:56:39 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:36.458 17:56:39 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:36.458 17:56:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:36.458 17:56:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.458 17:56:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:36.458 17:56:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.458 17:56:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:36.458 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:36.458 17:56:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.458 17:56:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:36.458 17:56:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.458 17:56:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:36.458 17:56:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.458 17:56:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:36.458 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:36.458 17:56:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.458 17:56:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:36.458 17:56:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:36.458 17:56:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:36.458 17:56:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:36.458 17:56:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:36.458 17:56:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:36.458 17:56:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:36.458 17:56:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:36.458 17:56:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:36.458 17:56:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:36.458 17:56:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:36.458 17:56:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:36.458 17:56:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:36.458 17:56:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:36.458 17:56:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:36.458 17:56:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:36.458 17:56:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:36.458 17:56:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:36.458 17:56:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:36.458 17:56:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:36.458 17:56:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:36.458 17:56:40 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:36.458 17:56:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:36.458 17:56:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:36.458 17:56:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:36.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:36.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms 00:19:36.458 00:19:36.458 --- 10.0.0.2 ping statistics --- 00:19:36.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.458 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:19:36.458 17:56:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:36.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:36.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:19:36.458 00:19:36.458 --- 10.0.0.1 ping statistics --- 00:19:36.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.458 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:19:36.458 17:56:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:36.458 17:56:40 -- nvmf/common.sh@410 -- # return 0 00:19:36.458 17:56:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:36.458 17:56:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:36.458 17:56:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:36.458 17:56:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:36.458 17:56:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:36.458 17:56:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:36.458 17:56:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:36.458 17:56:40 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:36.458 17:56:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:36.458 17:56:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:36.458 17:56:40 -- common/autotest_common.sh@10 -- # set +x 00:19:36.458 17:56:40 -- nvmf/common.sh@469 -- # nvmfpid=1690590 00:19:36.458 17:56:40 -- nvmf/common.sh@470 -- # waitforlisten 1690590 00:19:36.458 17:56:40 -- common/autotest_common.sh@819 -- # '[' -z 1690590 ']' 00:19:36.458 17:56:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:36.458 17:56:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.458 17:56:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:36.458 17:56:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.458 17:56:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:36.458 17:56:40 -- common/autotest_common.sh@10 -- # set +x 00:19:36.458 [2024-07-22 17:56:40.371032] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:36.458 [2024-07-22 17:56:40.371078] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.458 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.458 [2024-07-22 17:56:40.457498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:36.458 [2024-07-22 17:56:40.524916] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:36.458 [2024-07-22 17:56:40.525055] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.458 [2024-07-22 17:56:40.525065] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:36.458 [2024-07-22 17:56:40.525073] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:36.458 [2024-07-22 17:56:40.525200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.458 [2024-07-22 17:56:40.525320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.458 [2024-07-22 17:56:40.525464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:36.458 [2024-07-22 17:56:40.525557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.719 17:56:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:36.719 17:56:40 -- common/autotest_common.sh@852 -- # return 0 00:19:36.719 17:56:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:36.719 17:56:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:36.719 17:56:40 -- common/autotest_common.sh@10 -- # set +x 00:19:36.719 17:56:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.719 17:56:40 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:36.979 [2024-07-22 17:56:41.083081] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.979 17:56:41 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:37.240 17:56:41 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:37.240 17:56:41 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:37.500 17:56:41 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:37.500 17:56:41 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:37.500 17:56:41 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:37.500 17:56:41 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:37.760 17:56:41 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:37.760 17:56:41 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:38.020 17:56:42 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:38.281 17:56:42 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:38.281 17:56:42 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:38.541 17:56:42 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:38.541 17:56:42 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:38.541 17:56:42 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:38.541 17:56:42 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:38.801 17:56:42 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:39.068 17:56:43 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:39.068 17:56:43 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:39.328 17:56:43 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:39.328 17:56:43 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:39.328 17:56:43 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:39.588 [2024-07-22 17:56:43.740891] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:39.588 17:56:43 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:39.849 17:56:43 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:40.109 17:56:44 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:41.494 17:56:45 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:41.494 17:56:45 -- common/autotest_common.sh@1177 -- # local i=0 00:19:41.494 17:56:45 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:41.494 17:56:45 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:19:41.494 17:56:45 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:19:41.494 17:56:45 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:44.034 17:56:47 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:44.034 17:56:47 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:44.034 17:56:47 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:44.034 17:56:47 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:19:44.034 17:56:47 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:44.034 17:56:47 -- common/autotest_common.sh@1187 -- # return 0 00:19:44.034 17:56:47 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:44.034 [global] 00:19:44.034 thread=1 00:19:44.034 invalidate=1 00:19:44.034 rw=write 00:19:44.034 time_based=1 00:19:44.034 runtime=1 00:19:44.034 ioengine=libaio 00:19:44.034 direct=1 00:19:44.034 bs=4096 00:19:44.034 iodepth=1 00:19:44.034 norandommap=0 00:19:44.034 numjobs=1 00:19:44.034 00:19:44.034 verify_dump=1 00:19:44.034 verify_backlog=512 00:19:44.034 verify_state_save=0 00:19:44.034 do_verify=1 00:19:44.034 verify=crc32c-intel 00:19:44.034 [job0] 00:19:44.034 filename=/dev/nvme0n1 00:19:44.034 [job1] 00:19:44.034 filename=/dev/nvme0n2 00:19:44.034 [job2] 00:19:44.034 filename=/dev/nvme0n3 00:19:44.034 [job3] 00:19:44.034 filename=/dev/nvme0n4 00:19:44.034 Could not set queue depth (nvme0n1) 00:19:44.034 Could not set queue depth (nvme0n2) 00:19:44.034 Could not set queue depth (nvme0n3) 00:19:44.034 Could not set queue depth (nvme0n4) 00:19:44.034 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:44.034 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:44.034 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:19:44.034 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:44.034 fio-3.35 00:19:44.034 Starting 4 threads 00:19:45.414 00:19:45.414 job0: (groupid=0, jobs=1): err= 0: pid=1692131: Mon Jul 22 17:56:49 2024 00:19:45.414 read: IOPS=658, BW=2633KiB/s (2697kB/s)(2636KiB/1001msec) 00:19:45.415 slat (nsec): min=6208, max=55058, avg=24213.38, stdev=7616.26 00:19:45.415 clat (usec): min=447, max=1033, avg=733.02, stdev=104.04 00:19:45.415 lat (usec): min=457, max=1060, avg=757.23, stdev=107.16 00:19:45.415 clat percentiles (usec): 00:19:45.415 | 1.00th=[ 515], 5.00th=[ 553], 10.00th=[ 586], 20.00th=[ 644], 00:19:45.415 | 30.00th=[ 685], 40.00th=[ 709], 50.00th=[ 734], 60.00th=[ 766], 00:19:45.415 | 70.00th=[ 791], 80.00th=[ 824], 90.00th=[ 865], 95.00th=[ 906], 00:19:45.415 | 99.00th=[ 963], 99.50th=[ 979], 99.90th=[ 1037], 99.95th=[ 1037], 00:19:45.415 | 99.99th=[ 1037] 00:19:45.415 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:19:45.415 slat (nsec): min=8901, max=66464, avg=30750.59, stdev=10987.64 00:19:45.415 clat (usec): min=126, max=797, avg=445.36, stdev=115.11 00:19:45.415 lat (usec): min=136, max=833, avg=476.11, stdev=118.89 00:19:45.415 clat percentiles (usec): 00:19:45.415 | 1.00th=[ 180], 5.00th=[ 247], 10.00th=[ 285], 20.00th=[ 351], 00:19:45.415 | 30.00th=[ 388], 40.00th=[ 420], 50.00th=[ 461], 60.00th=[ 482], 00:19:45.415 | 70.00th=[ 510], 80.00th=[ 545], 90.00th=[ 594], 95.00th=[ 635], 00:19:45.415 | 99.00th=[ 685], 99.50th=[ 701], 99.90th=[ 742], 99.95th=[ 799], 00:19:45.415 | 99.99th=[ 799] 00:19:45.415 bw ( KiB/s): min= 4096, max= 4096, per=36.41%, avg=4096.00, stdev= 0.00, samples=1 00:19:45.415 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:45.415 lat (usec) : 250=3.27%, 500=38.03%, 750=41.00%, 1000=17.59% 00:19:45.415 lat (msec) : 2=0.12% 00:19:45.415 cpu : usr=3.10%, sys=6.40%, ctx=1684, majf=0, minf=1 00:19:45.415 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:45.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.415 issued rwts: total=659,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.415 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:45.415 job1: (groupid=0, jobs=1): err= 0: pid=1692144: Mon Jul 22 17:56:49 2024 00:19:45.415 read: IOPS=18, BW=75.5KiB/s (77.3kB/s)(76.0KiB/1007msec) 00:19:45.415 slat (nsec): min=25628, max=26663, avg=25895.47, stdev=265.63 00:19:45.415 clat (usec): min=40835, max=41115, avg=40966.03, stdev=72.04 00:19:45.415 lat (usec): min=40861, max=41141, avg=40991.93, stdev=72.05 00:19:45.415 clat percentiles (usec): 00:19:45.415 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:19:45.415 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:45.415 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:45.415 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:45.415 | 99.99th=[41157] 00:19:45.415 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:19:45.415 slat (nsec): min=8727, max=58718, avg=29797.60, stdev=10234.20 00:19:45.415 clat (usec): min=134, max=840, avg=405.25, stdev=130.97 00:19:45.415 lat (usec): min=143, max=872, avg=435.04, stdev=136.72 00:19:45.415 clat percentiles (usec): 00:19:45.415 | 1.00th=[ 149], 5.00th=[ 169], 10.00th=[ 243], 
20.00th=[ 297], 00:19:45.415 | 30.00th=[ 326], 40.00th=[ 355], 50.00th=[ 396], 60.00th=[ 449], 00:19:45.415 | 70.00th=[ 482], 80.00th=[ 529], 90.00th=[ 578], 95.00th=[ 619], 00:19:45.415 | 99.00th=[ 676], 99.50th=[ 693], 99.90th=[ 840], 99.95th=[ 840], 00:19:45.415 | 99.99th=[ 840] 00:19:45.415 bw ( KiB/s): min= 4096, max= 4096, per=36.41%, avg=4096.00, stdev= 0.00, samples=1 00:19:45.415 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:45.415 lat (usec) : 250=10.73%, 500=60.26%, 750=25.05%, 1000=0.38% 00:19:45.415 lat (msec) : 50=3.58% 00:19:45.415 cpu : usr=0.89%, sys=1.99%, ctx=533, majf=0, minf=1 00:19:45.415 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:45.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.415 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.415 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:45.415 job2: (groupid=0, jobs=1): err= 0: pid=1692160: Mon Jul 22 17:56:49 2024 00:19:45.415 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:19:45.415 slat (nsec): min=6272, max=54235, avg=26553.58, stdev=3110.85 00:19:45.415 clat (usec): min=587, max=2503, avg=958.61, stdev=107.66 00:19:45.415 lat (usec): min=594, max=2530, avg=985.17, stdev=108.00 00:19:45.415 clat percentiles (usec): 00:19:45.415 | 1.00th=[ 701], 5.00th=[ 791], 10.00th=[ 848], 20.00th=[ 898], 00:19:45.415 | 30.00th=[ 930], 40.00th=[ 955], 50.00th=[ 971], 60.00th=[ 988], 00:19:45.415 | 70.00th=[ 1004], 80.00th=[ 1020], 90.00th=[ 1045], 95.00th=[ 1057], 00:19:45.415 | 99.00th=[ 1123], 99.50th=[ 1139], 99.90th=[ 2507], 99.95th=[ 2507], 00:19:45.415 | 99.99th=[ 2507] 00:19:45.415 write: IOPS=820, BW=3281KiB/s (3359kB/s)(3284KiB/1001msec); 0 zone resets 00:19:45.415 slat (nsec): min=9130, max=60700, avg=29520.96, stdev=10792.11 00:19:45.415 clat (usec): min=226, max=920, avg=561.14, stdev=130.19 00:19:45.415 lat (usec): min=236, max=954, avg=590.66, stdev=136.35 00:19:45.415 clat percentiles (usec): 00:19:45.415 | 1.00th=[ 245], 5.00th=[ 314], 10.00th=[ 371], 20.00th=[ 453], 00:19:45.415 | 30.00th=[ 506], 40.00th=[ 545], 50.00th=[ 570], 60.00th=[ 603], 00:19:45.415 | 70.00th=[ 635], 80.00th=[ 668], 90.00th=[ 717], 95.00th=[ 758], 00:19:45.415 | 99.00th=[ 832], 99.50th=[ 848], 99.90th=[ 922], 99.95th=[ 922], 00:19:45.415 | 99.99th=[ 922] 00:19:45.415 bw ( KiB/s): min= 4096, max= 4096, per=36.41%, avg=4096.00, stdev= 0.00, samples=1 00:19:45.415 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:45.415 lat (usec) : 250=1.13%, 500=16.88%, 750=41.04%, 1000=28.96% 00:19:45.415 lat (msec) : 2=11.93%, 4=0.08% 00:19:45.415 cpu : usr=2.70%, sys=5.00%, ctx=1334, majf=0, minf=1 00:19:45.415 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:45.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.415 issued rwts: total=512,821,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.415 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:45.415 job3: (groupid=0, jobs=1): err= 0: pid=1692165: Mon Jul 22 17:56:49 2024 00:19:45.415 read: IOPS=22, BW=90.2KiB/s (92.4kB/s)(92.0KiB/1020msec) 00:19:45.415 slat (nsec): min=24449, max=24982, avg=24766.26, stdev=134.65 00:19:45.415 clat (usec): min=891, max=41801, avg=30633.60, stdev=18017.59 
00:19:45.415 lat (usec): min=916, max=41826, avg=30658.36, stdev=18017.58 00:19:45.415 clat percentiles (usec): 00:19:45.415 | 1.00th=[ 889], 5.00th=[ 938], 10.00th=[ 955], 20.00th=[ 996], 00:19:45.415 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:45.415 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:19:45.415 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:19:45.415 | 99.99th=[41681] 00:19:45.415 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:19:45.415 slat (nsec): min=9894, max=68851, avg=30333.78, stdev=8910.97 00:19:45.415 clat (usec): min=175, max=1840, avg=576.18, stdev=141.82 00:19:45.415 lat (usec): min=185, max=1873, avg=606.51, stdev=144.88 00:19:45.415 clat percentiles (usec): 00:19:45.415 | 1.00th=[ 253], 5.00th=[ 326], 10.00th=[ 404], 20.00th=[ 465], 00:19:45.415 | 30.00th=[ 523], 40.00th=[ 562], 50.00th=[ 586], 60.00th=[ 611], 00:19:45.415 | 70.00th=[ 644], 80.00th=[ 685], 90.00th=[ 734], 95.00th=[ 775], 00:19:45.415 | 99.00th=[ 832], 99.50th=[ 898], 99.90th=[ 1844], 99.95th=[ 1844], 00:19:45.415 | 99.99th=[ 1844] 00:19:45.415 bw ( KiB/s): min= 4096, max= 4096, per=36.41%, avg=4096.00, stdev= 0.00, samples=1 00:19:45.415 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:45.415 lat (usec) : 250=0.93%, 500=23.18%, 750=64.67%, 1000=7.66% 00:19:45.415 lat (msec) : 2=0.37%, 50=3.18% 00:19:45.415 cpu : usr=0.79%, sys=1.47%, ctx=536, majf=0, minf=1 00:19:45.415 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:45.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.415 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.415 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:45.415 00:19:45.415 Run status group 0 (all jobs): 00:19:45.415 READ: bw=4757KiB/s (4871kB/s), 75.5KiB/s-2633KiB/s (77.3kB/s-2697kB/s), io=4852KiB (4968kB), run=1001-1020msec 00:19:45.415 WRITE: bw=11.0MiB/s (11.5MB/s), 2008KiB/s-4092KiB/s (2056kB/s-4190kB/s), io=11.2MiB (11.8MB), run=1001-1020msec 00:19:45.415 00:19:45.415 Disk stats (read/write): 00:19:45.415 nvme0n1: ios=538/926, merge=0/0, ticks=1311/365, in_queue=1676, util=97.70% 00:19:45.415 nvme0n2: ios=65/512, merge=0/0, ticks=1763/145, in_queue=1908, util=98.07% 00:19:45.415 nvme0n3: ios=535/558, merge=0/0, ticks=1452/257, in_queue=1709, util=97.80% 00:19:45.415 nvme0n4: ios=40/512, merge=0/0, ticks=1442/282, in_queue=1724, util=97.99% 00:19:45.415 17:56:49 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:45.415 [global] 00:19:45.415 thread=1 00:19:45.415 invalidate=1 00:19:45.415 rw=randwrite 00:19:45.415 time_based=1 00:19:45.415 runtime=1 00:19:45.415 ioengine=libaio 00:19:45.415 direct=1 00:19:45.415 bs=4096 00:19:45.415 iodepth=1 00:19:45.415 norandommap=0 00:19:45.415 numjobs=1 00:19:45.415 00:19:45.415 verify_dump=1 00:19:45.415 verify_backlog=512 00:19:45.415 verify_state_save=0 00:19:45.415 do_verify=1 00:19:45.415 verify=crc32c-intel 00:19:45.415 [job0] 00:19:45.415 filename=/dev/nvme0n1 00:19:45.415 [job1] 00:19:45.415 filename=/dev/nvme0n2 00:19:45.415 [job2] 00:19:45.415 filename=/dev/nvme0n3 00:19:45.415 [job3] 00:19:45.415 filename=/dev/nvme0n4 00:19:45.415 Could not set queue depth (nvme0n1) 00:19:45.415 Could not set queue depth (nvme0n2) 
00:19:45.415 Could not set queue depth (nvme0n3) 00:19:45.415 Could not set queue depth (nvme0n4) 00:19:45.675 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:45.675 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:45.675 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:45.675 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:45.675 fio-3.35 00:19:45.675 Starting 4 threads 00:19:47.062 00:19:47.062 job0: (groupid=0, jobs=1): err= 0: pid=1692562: Mon Jul 22 17:56:50 2024 00:19:47.062 read: IOPS=16, BW=67.5KiB/s (69.1kB/s)(68.0KiB/1007msec) 00:19:47.062 slat (nsec): min=24061, max=24590, avg=24279.76, stdev=169.77 00:19:47.062 clat (usec): min=41006, max=42030, avg=41801.67, stdev=356.71 00:19:47.062 lat (usec): min=41030, max=42054, avg=41825.95, stdev=356.72 00:19:47.062 clat percentiles (usec): 00:19:47.062 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:19:47.062 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:19:47.062 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:47.062 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:47.062 | 99.99th=[42206] 00:19:47.062 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:19:47.062 slat (nsec): min=8632, max=49954, avg=26307.92, stdev=9134.96 00:19:47.062 clat (usec): min=130, max=2395, avg=543.27, stdev=179.47 00:19:47.062 lat (usec): min=139, max=2424, avg=569.57, stdev=182.46 00:19:47.062 clat percentiles (usec): 00:19:47.062 | 1.00th=[ 147], 5.00th=[ 289], 10.00th=[ 347], 20.00th=[ 429], 00:19:47.062 | 30.00th=[ 461], 40.00th=[ 494], 50.00th=[ 545], 60.00th=[ 586], 00:19:47.062 | 70.00th=[ 627], 80.00th=[ 668], 90.00th=[ 709], 95.00th=[ 758], 00:19:47.062 | 99.00th=[ 840], 99.50th=[ 1467], 99.90th=[ 2409], 99.95th=[ 2409], 00:19:47.062 | 99.99th=[ 2409] 00:19:47.062 bw ( KiB/s): min= 4096, max= 4096, per=34.13%, avg=4096.00, stdev= 0.00, samples=1 00:19:47.062 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:47.062 lat (usec) : 250=2.84%, 500=37.05%, 750=51.98%, 1000=3.97% 00:19:47.062 lat (msec) : 2=0.76%, 4=0.19%, 50=3.21% 00:19:47.062 cpu : usr=0.99%, sys=0.99%, ctx=529, majf=0, minf=1 00:19:47.062 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:47.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.062 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.062 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:47.062 job1: (groupid=0, jobs=1): err= 0: pid=1692563: Mon Jul 22 17:56:50 2024 00:19:47.062 read: IOPS=523, BW=2094KiB/s (2144kB/s)(2096KiB/1001msec) 00:19:47.062 slat (nsec): min=6178, max=63260, avg=23431.74, stdev=8548.34 00:19:47.062 clat (usec): min=362, max=1062, avg=750.66, stdev=116.56 00:19:47.062 lat (usec): min=383, max=1089, avg=774.09, stdev=118.48 00:19:47.062 clat percentiles (usec): 00:19:47.062 | 1.00th=[ 457], 5.00th=[ 570], 10.00th=[ 611], 20.00th=[ 660], 00:19:47.062 | 30.00th=[ 701], 40.00th=[ 725], 50.00th=[ 742], 60.00th=[ 766], 00:19:47.062 | 70.00th=[ 799], 80.00th=[ 857], 90.00th=[ 914], 95.00th=[ 947], 00:19:47.062 | 99.00th=[ 
988], 99.50th=[ 1020], 99.90th=[ 1057], 99.95th=[ 1057], 00:19:47.062 | 99.99th=[ 1057] 00:19:47.062 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:19:47.062 slat (nsec): min=8489, max=68365, avg=26781.21, stdev=11109.13 00:19:47.062 clat (usec): min=194, max=1860, avg=542.14, stdev=150.55 00:19:47.062 lat (usec): min=203, max=1906, avg=568.92, stdev=157.07 00:19:47.062 clat percentiles (usec): 00:19:47.062 | 1.00th=[ 215], 5.00th=[ 258], 10.00th=[ 318], 20.00th=[ 429], 00:19:47.062 | 30.00th=[ 482], 40.00th=[ 529], 50.00th=[ 553], 60.00th=[ 586], 00:19:47.062 | 70.00th=[ 627], 80.00th=[ 660], 90.00th=[ 701], 95.00th=[ 734], 00:19:47.062 | 99.00th=[ 865], 99.50th=[ 898], 99.90th=[ 1549], 99.95th=[ 1860], 00:19:47.062 | 99.99th=[ 1860] 00:19:47.062 bw ( KiB/s): min= 4096, max= 4096, per=34.13%, avg=4096.00, stdev= 0.00, samples=1 00:19:47.062 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:47.062 lat (usec) : 250=2.45%, 500=20.74%, 750=57.95%, 1000=18.28% 00:19:47.062 lat (msec) : 2=0.58% 00:19:47.062 cpu : usr=2.90%, sys=5.00%, ctx=1551, majf=0, minf=1 00:19:47.062 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:47.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.062 issued rwts: total=524,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.062 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:47.062 job2: (groupid=0, jobs=1): err= 0: pid=1692564: Mon Jul 22 17:56:50 2024 00:19:47.062 read: IOPS=17, BW=71.4KiB/s (73.1kB/s)(72.0KiB/1008msec) 00:19:47.062 slat (nsec): min=26038, max=27613, avg=26650.78, stdev=407.57 00:19:47.062 clat (usec): min=40773, max=42093, avg=41680.24, stdev=484.24 00:19:47.062 lat (usec): min=40800, max=42120, avg=41706.89, stdev=483.94 00:19:47.062 clat percentiles (usec): 00:19:47.062 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:19:47.062 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:19:47.062 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:47.062 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:47.062 | 99.99th=[42206] 00:19:47.062 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:19:47.062 slat (nsec): min=8592, max=63709, avg=27843.14, stdev=11646.81 00:19:47.062 clat (usec): min=176, max=4073, avg=463.52, stdev=201.09 00:19:47.062 lat (usec): min=186, max=4136, avg=491.37, stdev=205.16 00:19:47.062 clat percentiles (usec): 00:19:47.062 | 1.00th=[ 231], 5.00th=[ 258], 10.00th=[ 297], 20.00th=[ 347], 00:19:47.062 | 30.00th=[ 383], 40.00th=[ 437], 50.00th=[ 461], 60.00th=[ 482], 00:19:47.062 | 70.00th=[ 506], 80.00th=[ 545], 90.00th=[ 619], 95.00th=[ 685], 00:19:47.062 | 99.00th=[ 775], 99.50th=[ 791], 99.90th=[ 4080], 99.95th=[ 4080], 00:19:47.062 | 99.99th=[ 4080] 00:19:47.062 bw ( KiB/s): min= 4096, max= 4096, per=34.13%, avg=4096.00, stdev= 0.00, samples=1 00:19:47.062 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:47.062 lat (usec) : 250=3.96%, 500=61.32%, 750=30.00%, 1000=1.13% 00:19:47.062 lat (msec) : 10=0.19%, 50=3.40% 00:19:47.062 cpu : usr=1.49%, sys=1.29%, ctx=531, majf=0, minf=1 00:19:47.062 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:47.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.062 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.062 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.062 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:47.062 job3: (groupid=0, jobs=1): err= 0: pid=1692566: Mon Jul 22 17:56:50 2024 00:19:47.062 read: IOPS=544, BW=2180KiB/s (2232kB/s)(2232KiB/1024msec) 00:19:47.062 slat (nsec): min=6711, max=47447, avg=24671.95, stdev=6854.16 00:19:47.062 clat (usec): min=384, max=41653, avg=998.55, stdev=2962.25 00:19:47.062 lat (usec): min=391, max=41679, avg=1023.23, stdev=2962.47 00:19:47.062 clat percentiles (usec): 00:19:47.062 | 1.00th=[ 502], 5.00th=[ 619], 10.00th=[ 668], 20.00th=[ 725], 00:19:47.062 | 30.00th=[ 750], 40.00th=[ 766], 50.00th=[ 799], 60.00th=[ 816], 00:19:47.062 | 70.00th=[ 832], 80.00th=[ 848], 90.00th=[ 873], 95.00th=[ 898], 00:19:47.062 | 99.00th=[ 955], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:19:47.062 | 99.99th=[41681] 00:19:47.062 write: IOPS=1000, BW=4000KiB/s (4096kB/s)(4096KiB/1024msec); 0 zone resets 00:19:47.062 slat (nsec): min=8933, max=57864, avg=29692.86, stdev=9442.83 00:19:47.062 clat (usec): min=148, max=1269, avg=400.18, stdev=113.62 00:19:47.062 lat (usec): min=160, max=1309, avg=429.87, stdev=117.09 00:19:47.062 clat percentiles (usec): 00:19:47.062 | 1.00th=[ 204], 5.00th=[ 239], 10.00th=[ 265], 20.00th=[ 302], 00:19:47.062 | 30.00th=[ 322], 40.00th=[ 347], 50.00th=[ 400], 60.00th=[ 433], 00:19:47.062 | 70.00th=[ 457], 80.00th=[ 498], 90.00th=[ 553], 95.00th=[ 586], 00:19:47.062 | 99.00th=[ 668], 99.50th=[ 709], 99.90th=[ 1045], 99.95th=[ 1270], 00:19:47.062 | 99.99th=[ 1270] 00:19:47.062 bw ( KiB/s): min= 4096, max= 4096, per=34.13%, avg=4096.00, stdev= 0.00, samples=2 00:19:47.062 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:19:47.062 lat (usec) : 250=4.55%, 500=48.17%, 750=22.88%, 1000=24.08% 00:19:47.062 lat (msec) : 2=0.13%, 50=0.19% 00:19:47.062 cpu : usr=2.93%, sys=5.96%, ctx=1583, majf=0, minf=1 00:19:47.062 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:47.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.062 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:47.062 issued rwts: total=558,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:47.062 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:47.062 00:19:47.062 Run status group 0 (all jobs): 00:19:47.062 READ: bw=4363KiB/s (4468kB/s), 67.5KiB/s-2180KiB/s (69.1kB/s-2232kB/s), io=4468KiB (4575kB), run=1001-1024msec 00:19:47.062 WRITE: bw=11.7MiB/s (12.3MB/s), 2032KiB/s-4092KiB/s (2081kB/s-4190kB/s), io=12.0MiB (12.6MB), run=1001-1024msec 00:19:47.062 00:19:47.062 Disk stats (read/write): 00:19:47.062 nvme0n1: ios=63/512, merge=0/0, ticks=593/258, in_queue=851, util=87.78% 00:19:47.062 nvme0n2: ios=541/790, merge=0/0, ticks=1106/367, in_queue=1473, util=99.39% 00:19:47.063 nvme0n3: ios=37/512, merge=0/0, ticks=1539/193, in_queue=1732, util=97.80% 00:19:47.063 nvme0n4: ios=569/994, merge=0/0, ticks=802/295, in_queue=1097, util=97.99% 00:19:47.063 17:56:50 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:47.063 [global] 00:19:47.063 thread=1 00:19:47.063 invalidate=1 00:19:47.063 rw=write 00:19:47.063 time_based=1 00:19:47.063 runtime=1 00:19:47.063 ioengine=libaio 00:19:47.063 direct=1 00:19:47.063 bs=4096 00:19:47.063 iodepth=128 00:19:47.063 norandommap=0 00:19:47.063 
numjobs=1 00:19:47.063 00:19:47.063 verify_dump=1 00:19:47.063 verify_backlog=512 00:19:47.063 verify_state_save=0 00:19:47.063 do_verify=1 00:19:47.063 verify=crc32c-intel 00:19:47.063 [job0] 00:19:47.063 filename=/dev/nvme0n1 00:19:47.063 [job1] 00:19:47.063 filename=/dev/nvme0n2 00:19:47.063 [job2] 00:19:47.063 filename=/dev/nvme0n3 00:19:47.063 [job3] 00:19:47.063 filename=/dev/nvme0n4 00:19:47.063 Could not set queue depth (nvme0n1) 00:19:47.063 Could not set queue depth (nvme0n2) 00:19:47.063 Could not set queue depth (nvme0n3) 00:19:47.063 Could not set queue depth (nvme0n4) 00:19:47.063 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:47.063 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:47.063 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:47.063 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:47.063 fio-3.35 00:19:47.063 Starting 4 threads 00:19:48.445 00:19:48.445 job0: (groupid=0, jobs=1): err= 0: pid=1693012: Mon Jul 22 17:56:52 2024 00:19:48.446 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:19:48.446 slat (nsec): min=1243, max=17670k, avg=94617.56, stdev=730166.73 00:19:48.446 clat (usec): min=3771, max=33398, avg=13663.55, stdev=4491.76 00:19:48.446 lat (usec): min=3776, max=33404, avg=13758.17, stdev=4549.18 00:19:48.446 clat percentiles (usec): 00:19:48.446 | 1.00th=[ 6587], 5.00th=[ 8160], 10.00th=[ 8848], 20.00th=[10814], 00:19:48.446 | 30.00th=[11731], 40.00th=[12256], 50.00th=[12780], 60.00th=[13304], 00:19:48.446 | 70.00th=[13960], 80.00th=[15664], 90.00th=[19792], 95.00th=[23462], 00:19:48.446 | 99.00th=[30016], 99.50th=[30802], 99.90th=[33424], 99.95th=[33424], 00:19:48.446 | 99.99th=[33424] 00:19:48.446 write: IOPS=4836, BW=18.9MiB/s (19.8MB/s)(19.0MiB/1007msec); 0 zone resets 00:19:48.446 slat (usec): min=2, max=17107, avg=106.97, stdev=791.53 00:19:48.446 clat (usec): min=3764, max=58144, avg=13202.07, stdev=8596.39 00:19:48.446 lat (usec): min=3772, max=58153, avg=13309.04, stdev=8658.43 00:19:48.446 clat percentiles (usec): 00:19:48.446 | 1.00th=[ 3851], 5.00th=[ 6259], 10.00th=[ 7373], 20.00th=[ 8455], 00:19:48.446 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[10421], 60.00th=[11338], 00:19:48.446 | 70.00th=[12387], 80.00th=[16319], 90.00th=[22414], 95.00th=[34341], 00:19:48.446 | 99.00th=[50594], 99.50th=[54264], 99.90th=[57934], 99.95th=[57934], 00:19:48.446 | 99.99th=[57934] 00:19:48.446 bw ( KiB/s): min=17464, max=20480, per=20.84%, avg=18972.00, stdev=2132.63, samples=2 00:19:48.446 iops : min= 4366, max= 5120, avg=4743.00, stdev=533.16, samples=2 00:19:48.446 lat (msec) : 4=0.87%, 10=29.78%, 20=59.63%, 50=9.15%, 100=0.57% 00:19:48.446 cpu : usr=4.47%, sys=4.97%, ctx=313, majf=0, minf=1 00:19:48.446 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:48.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:48.446 issued rwts: total=4608,4870,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:48.446 job1: (groupid=0, jobs=1): err= 0: pid=1693013: Mon Jul 22 17:56:52 2024 00:19:48.446 read: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec) 00:19:48.446 slat (nsec): min=1198, max=14227k, 
avg=76004.90, stdev=553393.61 00:19:48.446 clat (usec): min=2403, max=33605, avg=9694.21, stdev=3297.88 00:19:48.446 lat (usec): min=2407, max=39238, avg=9770.21, stdev=3340.44 00:19:48.446 clat percentiles (usec): 00:19:48.446 | 1.00th=[ 4817], 5.00th=[ 6587], 10.00th=[ 7242], 20.00th=[ 8291], 00:19:48.446 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 9110], 00:19:48.446 | 70.00th=[ 9372], 80.00th=[10552], 90.00th=[12387], 95.00th=[15270], 00:19:48.446 | 99.00th=[25297], 99.50th=[28967], 99.90th=[30278], 99.95th=[30278], 00:19:48.446 | 99.99th=[33817] 00:19:48.446 write: IOPS=6768, BW=26.4MiB/s (27.7MB/s)(26.5MiB/1003msec); 0 zone resets 00:19:48.446 slat (usec): min=2, max=16750, avg=67.84, stdev=437.53 00:19:48.446 clat (usec): min=667, max=32799, avg=9204.39, stdev=3279.36 00:19:48.446 lat (usec): min=1361, max=32831, avg=9272.23, stdev=3304.79 00:19:48.446 clat percentiles (usec): 00:19:48.446 | 1.00th=[ 4293], 5.00th=[ 5538], 10.00th=[ 6325], 20.00th=[ 7832], 00:19:48.446 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 8979], 00:19:48.446 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[11863], 95.00th=[17957], 00:19:48.446 | 99.00th=[21627], 99.50th=[27395], 99.90th=[27395], 99.95th=[27395], 00:19:48.446 | 99.99th=[32900] 00:19:48.446 bw ( KiB/s): min=24576, max=28784, per=29.30%, avg=26680.00, stdev=2975.51, samples=2 00:19:48.446 iops : min= 6144, max= 7196, avg=6670.00, stdev=743.88, samples=2 00:19:48.446 lat (usec) : 750=0.01% 00:19:48.446 lat (msec) : 2=0.07%, 4=0.18%, 10=80.21%, 20=17.11%, 50=2.42% 00:19:48.446 cpu : usr=2.99%, sys=6.69%, ctx=802, majf=0, minf=1 00:19:48.446 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:19:48.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:48.446 issued rwts: total=6656,6789,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:48.446 job2: (groupid=0, jobs=1): err= 0: pid=1693014: Mon Jul 22 17:56:52 2024 00:19:48.446 read: IOPS=6089, BW=23.8MiB/s (24.9MB/s)(23.9MiB/1004msec) 00:19:48.446 slat (nsec): min=1232, max=19081k, avg=93132.52, stdev=742248.93 00:19:48.446 clat (usec): min=2604, max=37124, avg=12423.41, stdev=5552.92 00:19:48.446 lat (usec): min=3332, max=37146, avg=12516.54, stdev=5596.39 00:19:48.446 clat percentiles (usec): 00:19:48.446 | 1.00th=[ 5211], 5.00th=[ 6325], 10.00th=[ 6652], 20.00th=[ 7898], 00:19:48.446 | 30.00th=[ 8586], 40.00th=[ 9372], 50.00th=[11076], 60.00th=[12518], 00:19:48.446 | 70.00th=[13698], 80.00th=[17171], 90.00th=[21890], 95.00th=[24249], 00:19:48.446 | 99.00th=[27132], 99.50th=[29754], 99.90th=[30016], 99.95th=[30016], 00:19:48.446 | 99.99th=[36963] 00:19:48.446 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:19:48.446 slat (usec): min=2, max=11089, avg=65.28, stdev=516.83 00:19:48.446 clat (usec): min=1177, max=27116, avg=8364.57, stdev=3290.96 00:19:48.446 lat (usec): min=1186, max=27119, avg=8429.85, stdev=3312.28 00:19:48.446 clat percentiles (usec): 00:19:48.446 | 1.00th=[ 2442], 5.00th=[ 4015], 10.00th=[ 4752], 20.00th=[ 6194], 00:19:48.446 | 30.00th=[ 6849], 40.00th=[ 7504], 50.00th=[ 7767], 60.00th=[ 8029], 00:19:48.446 | 70.00th=[ 8586], 80.00th=[10945], 90.00th=[13304], 95.00th=[15533], 00:19:48.446 | 99.00th=[17171], 99.50th=[21365], 99.90th=[21890], 99.95th=[27132], 00:19:48.446 | 99.99th=[27132] 00:19:48.446 bw ( KiB/s): 
min=20168, max=28984, per=26.99%, avg=24576.00, stdev=6233.85, samples=2 00:19:48.446 iops : min= 5042, max= 7246, avg=6144.00, stdev=1558.46, samples=2 00:19:48.446 lat (msec) : 2=0.40%, 4=2.37%, 10=57.53%, 20=33.42%, 50=6.28% 00:19:48.446 cpu : usr=4.09%, sys=5.98%, ctx=391, majf=0, minf=1 00:19:48.446 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:19:48.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:48.446 issued rwts: total=6114,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:48.446 job3: (groupid=0, jobs=1): err= 0: pid=1693015: Mon Jul 22 17:56:52 2024 00:19:48.446 read: IOPS=4719, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1003msec) 00:19:48.446 slat (nsec): min=1180, max=15687k, avg=101912.83, stdev=748916.32 00:19:48.446 clat (usec): min=1077, max=53491, avg=12705.59, stdev=6241.51 00:19:48.446 lat (usec): min=4547, max=53499, avg=12807.50, stdev=6284.57 00:19:48.446 clat percentiles (usec): 00:19:48.446 | 1.00th=[ 5211], 5.00th=[ 7373], 10.00th=[ 8356], 20.00th=[ 9503], 00:19:48.446 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10290], 60.00th=[10814], 00:19:48.446 | 70.00th=[12387], 80.00th=[15926], 90.00th=[20317], 95.00th=[25560], 00:19:48.446 | 99.00th=[43254], 99.50th=[44827], 99.90th=[53216], 99.95th=[53740], 00:19:48.446 | 99.99th=[53740] 00:19:48.446 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:19:48.446 slat (usec): min=2, max=18083, avg=83.43, stdev=572.52 00:19:48.446 clat (usec): min=763, max=76268, avg=13060.31, stdev=10785.79 00:19:48.446 lat (usec): min=774, max=76275, avg=13143.73, stdev=10839.64 00:19:48.446 clat percentiles (usec): 00:19:48.446 | 1.00th=[ 2114], 5.00th=[ 5145], 10.00th=[ 5997], 20.00th=[ 7308], 00:19:48.446 | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10159], 00:19:48.446 | 70.00th=[10421], 80.00th=[13435], 90.00th=[26870], 95.00th=[35390], 00:19:48.446 | 99.00th=[64750], 99.50th=[70779], 99.90th=[76022], 99.95th=[76022], 00:19:48.446 | 99.99th=[76022] 00:19:48.446 bw ( KiB/s): min=16384, max=24568, per=22.49%, avg=20476.00, stdev=5786.96, samples=2 00:19:48.446 iops : min= 4096, max= 6142, avg=5119.00, stdev=1446.74, samples=2 00:19:48.446 lat (usec) : 1000=0.11% 00:19:48.446 lat (msec) : 2=0.38%, 4=0.94%, 10=41.85%, 20=43.64%, 50=11.95% 00:19:48.446 lat (msec) : 100=1.13% 00:19:48.446 cpu : usr=3.89%, sys=4.79%, ctx=524, majf=0, minf=1 00:19:48.446 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:48.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:48.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:48.446 issued rwts: total=4734,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:48.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:48.446 00:19:48.446 Run status group 0 (all jobs): 00:19:48.446 READ: bw=85.8MiB/s (89.9MB/s), 17.9MiB/s-25.9MiB/s (18.7MB/s-27.2MB/s), io=86.4MiB (90.6MB), run=1003-1007msec 00:19:48.446 WRITE: bw=88.9MiB/s (93.2MB/s), 18.9MiB/s-26.4MiB/s (19.8MB/s-27.7MB/s), io=89.5MiB (93.9MB), run=1003-1007msec 00:19:48.446 00:19:48.446 Disk stats (read/write): 00:19:48.446 nvme0n1: ios=4130/4234, merge=0/0, ticks=47001/37922, in_queue=84923, util=99.90% 00:19:48.446 nvme0n2: ios=5592/5632, merge=0/0, ticks=33196/30569, in_queue=63765, util=98.98% 00:19:48.446 
nvme0n3: ios=5120/5367, merge=0/0, ticks=60547/43080, in_queue=103627, util=88.82% 00:19:48.446 nvme0n4: ios=3670/4165, merge=0/0, ticks=30725/35893, in_queue=66618, util=96.73% 00:19:48.446 17:56:52 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:48.446 [global] 00:19:48.446 thread=1 00:19:48.446 invalidate=1 00:19:48.446 rw=randwrite 00:19:48.446 time_based=1 00:19:48.446 runtime=1 00:19:48.446 ioengine=libaio 00:19:48.446 direct=1 00:19:48.446 bs=4096 00:19:48.446 iodepth=128 00:19:48.446 norandommap=0 00:19:48.446 numjobs=1 00:19:48.446 00:19:48.446 verify_dump=1 00:19:48.446 verify_backlog=512 00:19:48.446 verify_state_save=0 00:19:48.446 do_verify=1 00:19:48.446 verify=crc32c-intel 00:19:48.446 [job0] 00:19:48.446 filename=/dev/nvme0n1 00:19:48.446 [job1] 00:19:48.446 filename=/dev/nvme0n2 00:19:48.446 [job2] 00:19:48.446 filename=/dev/nvme0n3 00:19:48.446 [job3] 00:19:48.446 filename=/dev/nvme0n4 00:19:48.446 Could not set queue depth (nvme0n1) 00:19:48.446 Could not set queue depth (nvme0n2) 00:19:48.446 Could not set queue depth (nvme0n3) 00:19:48.446 Could not set queue depth (nvme0n4) 00:19:48.706 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:48.706 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:48.706 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:48.706 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:48.706 fio-3.35 00:19:48.706 Starting 4 threads 00:19:50.088 00:19:50.088 job0: (groupid=0, jobs=1): err= 0: pid=1693495: Mon Jul 22 17:56:54 2024 00:19:50.088 read: IOPS=7251, BW=28.3MiB/s (29.7MB/s)(28.5MiB/1006msec) 00:19:50.088 slat (nsec): min=1199, max=10607k, avg=62084.68, stdev=516116.13 00:19:50.088 clat (usec): min=2014, max=25649, avg=8794.68, stdev=2479.44 00:19:50.088 lat (usec): min=2615, max=26894, avg=8856.76, stdev=2502.34 00:19:50.088 clat percentiles (usec): 00:19:50.088 | 1.00th=[ 3458], 5.00th=[ 5342], 10.00th=[ 6128], 20.00th=[ 7177], 00:19:50.088 | 30.00th=[ 7701], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8717], 00:19:50.088 | 70.00th=[ 9372], 80.00th=[10290], 90.00th=[11863], 95.00th=[13435], 00:19:50.088 | 99.00th=[17957], 99.50th=[17957], 99.90th=[19530], 99.95th=[19530], 00:19:50.088 | 99.99th=[25560] 00:19:50.088 write: IOPS=7634, BW=29.8MiB/s (31.3MB/s)(30.0MiB/1006msec); 0 zone resets 00:19:50.088 slat (usec): min=2, max=10718, avg=58.64, stdev=484.34 00:19:50.088 clat (usec): min=1183, max=30499, avg=8246.42, stdev=3359.62 00:19:50.088 lat (usec): min=1190, max=30510, avg=8305.05, stdev=3374.46 00:19:50.088 clat percentiles (usec): 00:19:50.088 | 1.00th=[ 3163], 5.00th=[ 4686], 10.00th=[ 5080], 20.00th=[ 5604], 00:19:50.088 | 30.00th=[ 6849], 40.00th=[ 7701], 50.00th=[ 8094], 60.00th=[ 8291], 00:19:50.088 | 70.00th=[ 8455], 80.00th=[ 8979], 90.00th=[11600], 95.00th=[13173], 00:19:50.088 | 99.00th=[20841], 99.50th=[28443], 99.90th=[30016], 99.95th=[30540], 00:19:50.088 | 99.99th=[30540] 00:19:50.088 bw ( KiB/s): min=28720, max=32712, per=32.16%, avg=30716.00, stdev=2822.77, samples=2 00:19:50.088 iops : min= 7180, max= 8178, avg=7679.00, stdev=705.69, samples=2 00:19:50.088 lat (msec) : 2=0.17%, 4=2.81%, 10=76.46%, 20=19.94%, 50=0.61% 00:19:50.088 cpu : usr=5.57%, sys=7.86%, 
ctx=416, majf=0, minf=1 00:19:50.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:19:50.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:50.088 issued rwts: total=7295,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:50.088 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:50.088 job1: (groupid=0, jobs=1): err= 0: pid=1693496: Mon Jul 22 17:56:54 2024 00:19:50.088 read: IOPS=6182, BW=24.1MiB/s (25.3MB/s)(24.2MiB/1004msec) 00:19:50.088 slat (nsec): min=1163, max=9595.5k, avg=72991.25, stdev=464657.25 00:19:50.088 clat (usec): min=1167, max=31054, avg=9779.91, stdev=2948.47 00:19:50.088 lat (usec): min=2301, max=31056, avg=9852.90, stdev=2977.91 00:19:50.088 clat percentiles (usec): 00:19:50.088 | 1.00th=[ 3556], 5.00th=[ 5145], 10.00th=[ 6783], 20.00th=[ 7832], 00:19:50.088 | 30.00th=[ 8586], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10028], 00:19:50.088 | 70.00th=[10290], 80.00th=[11338], 90.00th=[13042], 95.00th=[13960], 00:19:50.088 | 99.00th=[23987], 99.50th=[23987], 99.90th=[27919], 99.95th=[27919], 00:19:50.088 | 99.99th=[31065] 00:19:50.088 write: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec); 0 zone resets 00:19:50.088 slat (nsec): min=1944, max=8183.8k, avg=74170.15, stdev=439931.46 00:19:50.088 clat (usec): min=1181, max=28418, avg=10015.24, stdev=4240.36 00:19:50.088 lat (usec): min=1191, max=28427, avg=10089.41, stdev=4266.45 00:19:50.088 clat percentiles (usec): 00:19:50.088 | 1.00th=[ 2073], 5.00th=[ 4293], 10.00th=[ 5276], 20.00th=[ 6915], 00:19:50.088 | 30.00th=[ 8029], 40.00th=[ 8291], 50.00th=[ 9503], 60.00th=[10290], 00:19:50.088 | 70.00th=[11469], 80.00th=[13173], 90.00th=[14615], 95.00th=[17171], 00:19:50.088 | 99.00th=[24773], 99.50th=[26608], 99.90th=[28443], 99.95th=[28443], 00:19:50.088 | 99.99th=[28443] 00:19:50.088 bw ( KiB/s): min=24056, max=28729, per=27.63%, avg=26392.50, stdev=3304.31, samples=2 00:19:50.088 iops : min= 6014, max= 7182, avg=6598.00, stdev=825.90, samples=2 00:19:50.088 lat (msec) : 2=0.41%, 4=2.36%, 10=52.39%, 20=42.35%, 50=2.49% 00:19:50.088 cpu : usr=4.49%, sys=5.18%, ctx=595, majf=0, minf=1 00:19:50.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:19:50.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:50.088 issued rwts: total=6207,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:50.088 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:50.088 job2: (groupid=0, jobs=1): err= 0: pid=1693499: Mon Jul 22 17:56:54 2024 00:19:50.088 read: IOPS=4015, BW=15.7MiB/s (16.4MB/s)(15.8MiB/1004msec) 00:19:50.088 slat (nsec): min=1199, max=7400.4k, avg=122643.11, stdev=660682.55 00:19:50.088 clat (usec): min=1297, max=34311, avg=15440.36, stdev=6022.28 00:19:50.088 lat (usec): min=6479, max=34318, avg=15563.01, stdev=6046.50 00:19:50.088 clat percentiles (usec): 00:19:50.088 | 1.00th=[ 7111], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[10159], 00:19:50.088 | 30.00th=[10552], 40.00th=[12780], 50.00th=[14746], 60.00th=[15795], 00:19:50.088 | 70.00th=[17171], 80.00th=[19792], 90.00th=[24511], 95.00th=[28181], 00:19:50.088 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:19:50.088 | 99.99th=[34341] 00:19:50.088 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:19:50.088 slat (nsec): 
min=1987, max=11846k, avg=114317.42, stdev=656222.32 00:19:50.088 clat (usec): min=1164, max=41581, avg=15895.95, stdev=7520.82 00:19:50.088 lat (usec): min=1175, max=41589, avg=16010.27, stdev=7562.92 00:19:50.088 clat percentiles (usec): 00:19:50.088 | 1.00th=[ 5735], 5.00th=[ 6652], 10.00th=[ 7898], 20.00th=[ 9634], 00:19:50.088 | 30.00th=[10945], 40.00th=[11994], 50.00th=[14091], 60.00th=[16450], 00:19:50.088 | 70.00th=[19792], 80.00th=[21365], 90.00th=[25560], 95.00th=[31589], 00:19:50.088 | 99.00th=[39060], 99.50th=[40109], 99.90th=[41681], 99.95th=[41681], 00:19:50.088 | 99.99th=[41681] 00:19:50.088 bw ( KiB/s): min=12288, max=20521, per=17.17%, avg=16404.50, stdev=5821.61, samples=2 00:19:50.088 iops : min= 3072, max= 5130, avg=4101.00, stdev=1455.23, samples=2 00:19:50.088 lat (msec) : 2=0.04%, 10=19.50%, 20=57.10%, 50=23.36% 00:19:50.088 cpu : usr=2.69%, sys=3.79%, ctx=474, majf=0, minf=1 00:19:50.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:50.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:50.088 issued rwts: total=4032,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:50.088 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:50.088 job3: (groupid=0, jobs=1): err= 0: pid=1693500: Mon Jul 22 17:56:54 2024 00:19:50.088 read: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec) 00:19:50.088 slat (nsec): min=1282, max=21147k, avg=107666.12, stdev=792085.81 00:19:50.088 clat (usec): min=3685, max=57390, avg=13507.32, stdev=8994.36 00:19:50.088 lat (usec): min=3703, max=57407, avg=13614.98, stdev=9060.18 00:19:50.088 clat percentiles (usec): 00:19:50.088 | 1.00th=[ 4178], 5.00th=[ 7111], 10.00th=[ 7898], 20.00th=[ 8586], 00:19:50.088 | 30.00th=[ 9503], 40.00th=[10159], 50.00th=[10945], 60.00th=[11600], 00:19:50.088 | 70.00th=[12780], 80.00th=[15139], 90.00th=[22152], 95.00th=[31065], 00:19:50.088 | 99.00th=[54789], 99.50th=[56361], 99.90th=[57410], 99.95th=[57410], 00:19:50.088 | 99.99th=[57410] 00:19:50.088 write: IOPS=5555, BW=21.7MiB/s (22.8MB/s)(21.8MiB/1006msec); 0 zone resets 00:19:50.088 slat (usec): min=2, max=8736, avg=75.05, stdev=533.89 00:19:50.088 clat (usec): min=1165, max=50198, avg=10400.57, stdev=4948.13 00:19:50.088 lat (usec): min=3346, max=50201, avg=10475.62, stdev=4960.63 00:19:50.088 clat percentiles (usec): 00:19:50.088 | 1.00th=[ 4424], 5.00th=[ 5866], 10.00th=[ 6259], 20.00th=[ 7308], 00:19:50.089 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:19:50.089 | 70.00th=[10421], 80.00th=[11469], 90.00th=[14222], 95.00th=[16909], 00:19:50.089 | 99.00th=[38536], 99.50th=[45876], 99.90th=[50070], 99.95th=[50070], 00:19:50.089 | 99.99th=[50070] 00:19:50.089 bw ( KiB/s): min=16416, max=27304, per=22.89%, avg=21860.00, stdev=7698.98, samples=2 00:19:50.089 iops : min= 4104, max= 6826, avg=5465.00, stdev=1924.74, samples=2 00:19:50.089 lat (msec) : 2=0.01%, 4=0.55%, 10=50.37%, 20=42.09%, 50=5.85% 00:19:50.089 lat (msec) : 100=1.14% 00:19:50.089 cpu : usr=4.18%, sys=5.47%, ctx=367, majf=0, minf=1 00:19:50.089 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:19:50.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:50.089 issued rwts: total=5120,5589,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:50.089 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:19:50.089 00:19:50.089 Run status group 0 (all jobs): 00:19:50.089 READ: bw=88.0MiB/s (92.2MB/s), 15.7MiB/s-28.3MiB/s (16.4MB/s-29.7MB/s), io=88.5MiB (92.8MB), run=1004-1006msec 00:19:50.089 WRITE: bw=93.3MiB/s (97.8MB/s), 15.9MiB/s-29.8MiB/s (16.7MB/s-31.3MB/s), io=93.8MiB (98.4MB), run=1004-1006msec 00:19:50.089 00:19:50.089 Disk stats (read/write): 00:19:50.089 nvme0n1: ios=6168/6302, merge=0/0, ticks=50553/45482, in_queue=96035, util=98.90% 00:19:50.089 nvme0n2: ios=5659/5639, merge=0/0, ticks=37061/36484, in_queue=73545, util=88.21% 00:19:50.089 nvme0n3: ios=3351/3584, merge=0/0, ticks=20663/20279, in_queue=40942, util=88.61% 00:19:50.089 nvme0n4: ios=4134/4563, merge=0/0, ticks=36507/30983, in_queue=67490, util=98.31% 00:19:50.089 17:56:54 -- target/fio.sh@55 -- # sync 00:19:50.089 17:56:54 -- target/fio.sh@59 -- # fio_pid=1693767 00:19:50.089 17:56:54 -- target/fio.sh@61 -- # sleep 3 00:19:50.089 17:56:54 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:50.089 [global] 00:19:50.089 thread=1 00:19:50.089 invalidate=1 00:19:50.089 rw=read 00:19:50.089 time_based=1 00:19:50.089 runtime=10 00:19:50.089 ioengine=libaio 00:19:50.089 direct=1 00:19:50.089 bs=4096 00:19:50.089 iodepth=1 00:19:50.089 norandommap=1 00:19:50.089 numjobs=1 00:19:50.089 00:19:50.089 [job0] 00:19:50.089 filename=/dev/nvme0n1 00:19:50.089 [job1] 00:19:50.089 filename=/dev/nvme0n2 00:19:50.089 [job2] 00:19:50.089 filename=/dev/nvme0n3 00:19:50.089 [job3] 00:19:50.089 filename=/dev/nvme0n4 00:19:50.089 Could not set queue depth (nvme0n1) 00:19:50.089 Could not set queue depth (nvme0n2) 00:19:50.089 Could not set queue depth (nvme0n3) 00:19:50.089 Could not set queue depth (nvme0n4) 00:19:50.350 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:50.350 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:50.350 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:50.350 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:50.350 fio-3.35 00:19:50.350 Starting 4 threads 00:19:52.892 17:56:57 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:53.152 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=12210176, buflen=4096 00:19:53.152 fio: pid=1693975, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:53.152 17:56:57 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:53.412 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=11984896, buflen=4096 00:19:53.412 fio: pid=1693974, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:53.412 17:56:57 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:53.412 17:56:57 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:53.673 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=2400256, buflen=4096 00:19:53.673 fio: pid=1693972, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:53.673 17:56:57 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:19:53.673 17:56:57 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:53.933 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=12963840, buflen=4096 00:19:53.933 fio: pid=1693973, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:53.933 17:56:57 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:53.933 17:56:57 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:53.933 00:19:53.933 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1693972: Mon Jul 22 17:56:57 2024 00:19:53.933 read: IOPS=190, BW=762KiB/s (781kB/s)(2344KiB/3075msec) 00:19:53.933 slat (usec): min=6, max=13112, avg=45.74, stdev=540.37 00:19:53.933 clat (usec): min=209, max=45122, avg=5190.09, stdev=12421.51 00:19:53.933 lat (usec): min=216, max=54486, avg=5235.77, stdev=12498.63 00:19:53.933 clat percentiles (usec): 00:19:53.933 | 1.00th=[ 285], 5.00th=[ 562], 10.00th=[ 676], 20.00th=[ 807], 00:19:53.933 | 30.00th=[ 881], 40.00th=[ 938], 50.00th=[ 979], 60.00th=[ 1012], 00:19:53.933 | 70.00th=[ 1074], 80.00th=[ 1156], 90.00th=[40633], 95.00th=[41681], 00:19:53.933 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:19:53.933 | 99.99th=[45351] 00:19:53.933 bw ( KiB/s): min= 224, max= 3400, per=7.61%, avg=897.60, stdev=1399.58, samples=5 00:19:53.933 iops : min= 56, max= 850, avg=224.40, stdev=349.90, samples=5 00:19:53.933 lat (usec) : 250=0.34%, 500=2.21%, 750=12.10%, 1000=41.06% 00:19:53.933 lat (msec) : 2=33.39%, 4=0.17%, 20=0.17%, 50=10.39% 00:19:53.933 cpu : usr=0.20%, sys=0.52%, ctx=591, majf=0, minf=1 00:19:53.933 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:53.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.933 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.933 issued rwts: total=587,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:53.933 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:53.933 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1693973: Mon Jul 22 17:56:57 2024 00:19:53.933 read: IOPS=965, BW=3862KiB/s (3955kB/s)(12.4MiB/3278msec) 00:19:53.933 slat (usec): min=5, max=21750, avg=37.79, stdev=432.88 00:19:53.933 clat (usec): min=254, max=41604, avg=990.82, stdev=1750.60 00:19:53.933 lat (usec): min=262, max=41629, avg=1028.61, stdev=1803.87 00:19:53.933 clat percentiles (usec): 00:19:53.933 | 1.00th=[ 506], 5.00th=[ 660], 10.00th=[ 742], 20.00th=[ 824], 00:19:53.933 | 30.00th=[ 881], 40.00th=[ 914], 50.00th=[ 938], 60.00th=[ 963], 00:19:53.933 | 70.00th=[ 979], 80.00th=[ 1012], 90.00th=[ 1045], 95.00th=[ 1090], 00:19:53.933 | 99.00th=[ 1188], 99.50th=[ 1254], 99.90th=[41157], 99.95th=[41157], 00:19:53.933 | 99.99th=[41681] 00:19:53.933 bw ( KiB/s): min= 3906, max= 4184, per=34.69%, avg=4088.33, stdev=99.83, samples=6 00:19:53.933 iops : min= 976, max= 1046, avg=1022.00, stdev=25.14, samples=6 00:19:53.933 lat (usec) : 500=0.88%, 750=9.73%, 1000=67.02% 00:19:53.933 lat (msec) : 2=22.14%, 50=0.19% 00:19:53.933 cpu : usr=1.43%, sys=4.36%, ctx=3169, majf=0, minf=1 00:19:53.933 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:53.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:19:53.934 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.934 issued rwts: total=3166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:53.934 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:53.934 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1693974: Mon Jul 22 17:56:57 2024 00:19:53.934 read: IOPS=1028, BW=4111KiB/s (4210kB/s)(11.4MiB/2847msec) 00:19:53.934 slat (usec): min=6, max=13692, avg=32.72, stdev=302.30 00:19:53.934 clat (usec): min=407, max=2003, avg=933.68, stdev=104.87 00:19:53.934 lat (usec): min=415, max=14689, avg=966.40, stdev=322.10 00:19:53.934 clat percentiles (usec): 00:19:53.934 | 1.00th=[ 611], 5.00th=[ 750], 10.00th=[ 799], 20.00th=[ 857], 00:19:53.934 | 30.00th=[ 898], 40.00th=[ 930], 50.00th=[ 947], 60.00th=[ 971], 00:19:53.934 | 70.00th=[ 988], 80.00th=[ 1012], 90.00th=[ 1045], 95.00th=[ 1074], 00:19:53.934 | 99.00th=[ 1139], 99.50th=[ 1156], 99.90th=[ 1205], 99.95th=[ 1221], 00:19:53.934 | 99.99th=[ 2008] 00:19:53.934 bw ( KiB/s): min= 4000, max= 4256, per=35.16%, avg=4144.00, stdev=94.49, samples=5 00:19:53.934 iops : min= 1000, max= 1064, avg=1036.00, stdev=23.62, samples=5 00:19:53.934 lat (usec) : 500=0.31%, 750=4.95%, 1000=69.94% 00:19:53.934 lat (msec) : 2=24.74%, 4=0.03% 00:19:53.934 cpu : usr=2.42%, sys=3.27%, ctx=2930, majf=0, minf=1 00:19:53.934 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:53.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.934 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.934 issued rwts: total=2927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:53.934 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:53.934 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1693975: Mon Jul 22 17:56:57 2024 00:19:53.934 read: IOPS=1129, BW=4515KiB/s (4623kB/s)(11.6MiB/2641msec) 00:19:53.934 slat (nsec): min=6423, max=56691, avg=23321.18, stdev=6596.98 00:19:53.934 clat (usec): min=189, max=41649, avg=856.59, stdev=2715.03 00:19:53.934 lat (usec): min=196, max=41673, avg=879.91, stdev=2715.11 00:19:53.934 clat percentiles (usec): 00:19:53.934 | 1.00th=[ 273], 5.00th=[ 400], 10.00th=[ 515], 20.00th=[ 578], 00:19:53.934 | 30.00th=[ 603], 40.00th=[ 619], 50.00th=[ 660], 60.00th=[ 742], 00:19:53.934 | 70.00th=[ 775], 80.00th=[ 799], 90.00th=[ 824], 95.00th=[ 848], 00:19:53.934 | 99.00th=[ 963], 99.50th=[ 1237], 99.90th=[41157], 99.95th=[41157], 00:19:53.934 | 99.99th=[41681] 00:19:53.934 bw ( KiB/s): min= 128, max= 5864, per=37.84%, avg=4459.20, stdev=2454.80, samples=5 00:19:53.934 iops : min= 32, max= 1466, avg=1114.80, stdev=613.70, samples=5 00:19:53.934 lat (usec) : 250=0.64%, 500=8.05%, 750=52.62%, 1000=37.73% 00:19:53.934 lat (msec) : 2=0.47%, 50=0.47% 00:19:53.934 cpu : usr=1.36%, sys=2.88%, ctx=2982, majf=0, minf=2 00:19:53.934 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:53.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.934 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:53.934 issued rwts: total=2982,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:53.934 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:53.934 00:19:53.934 Run status group 0 (all jobs): 00:19:53.934 READ: bw=11.5MiB/s (12.1MB/s), 762KiB/s-4515KiB/s (781kB/s-4623kB/s), io=37.7MiB (39.6MB), 
run=2641-3278msec 00:19:53.934 00:19:53.934 Disk stats (read/write): 00:19:53.934 nvme0n1: ios=578/0, merge=0/0, ticks=2778/0, in_queue=2778, util=94.72% 00:19:53.934 nvme0n2: ios=3160/0, merge=0/0, ticks=2868/0, in_queue=2868, util=94.99% 00:19:53.934 nvme0n3: ios=2687/0, merge=0/0, ticks=2439/0, in_queue=2439, util=96.15% 00:19:53.934 nvme0n4: ios=2920/0, merge=0/0, ticks=2490/0, in_queue=2490, util=96.44% 00:19:53.934 17:56:58 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:53.934 17:56:58 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:54.194 17:56:58 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:54.194 17:56:58 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:54.454 17:56:58 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:54.454 17:56:58 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:54.715 17:56:58 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:54.715 17:56:58 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:54.975 17:56:59 -- target/fio.sh@69 -- # fio_status=0 00:19:54.975 17:56:59 -- target/fio.sh@70 -- # wait 1693767 00:19:54.975 17:56:59 -- target/fio.sh@70 -- # fio_status=4 00:19:54.975 17:56:59 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:54.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:54.975 17:56:59 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:54.975 17:56:59 -- common/autotest_common.sh@1198 -- # local i=0 00:19:54.975 17:56:59 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:54.975 17:56:59 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:54.975 17:56:59 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:54.975 17:56:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:54.975 17:56:59 -- common/autotest_common.sh@1210 -- # return 0 00:19:54.975 17:56:59 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:54.975 17:56:59 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:54.975 nvmf hotplug test: fio failed as expected 00:19:54.975 17:56:59 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:55.235 17:56:59 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:55.235 17:56:59 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:55.236 17:56:59 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:55.236 17:56:59 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:55.236 17:56:59 -- target/fio.sh@91 -- # nvmftestfini 00:19:55.236 17:56:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:55.236 17:56:59 -- nvmf/common.sh@116 -- # sync 00:19:55.236 17:56:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:55.236 17:56:59 -- nvmf/common.sh@119 -- # set +e 00:19:55.236 17:56:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:55.236 17:56:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:55.236 rmmod nvme_tcp 00:19:55.236 
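The teardown that target/fio.sh performs in the trace above reduces to roughly the following sketch (the Malloc bdev names, the cnode1 NQN and the rpc.py calls are taken from the trace; the loop itself is a simplified reconstruction, not the verbatim script, and the workspace path is shortened):

# remove the malloc bdevs that back the subsystem namespaces
for malloc_bdev in Malloc3 Malloc4 Malloc5 Malloc6; do
    ./spdk/scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
done
# detach the initiator and drop the subsystem; fio sees its namespaces disappear
# and exits with err=121 (Remote I/O error), which is why the script reports
# "nvmf hotplug test: fio failed as expected"
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
./spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1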
rmmod nvme_fabrics 00:19:55.236 rmmod nvme_keyring 00:19:55.236 17:56:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:55.236 17:56:59 -- nvmf/common.sh@123 -- # set -e 00:19:55.236 17:56:59 -- nvmf/common.sh@124 -- # return 0 00:19:55.236 17:56:59 -- nvmf/common.sh@477 -- # '[' -n 1690590 ']' 00:19:55.236 17:56:59 -- nvmf/common.sh@478 -- # killprocess 1690590 00:19:55.236 17:56:59 -- common/autotest_common.sh@926 -- # '[' -z 1690590 ']' 00:19:55.236 17:56:59 -- common/autotest_common.sh@930 -- # kill -0 1690590 00:19:55.236 17:56:59 -- common/autotest_common.sh@931 -- # uname 00:19:55.236 17:56:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:55.236 17:56:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1690590 00:19:55.236 17:56:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:55.236 17:56:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:55.236 17:56:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1690590' 00:19:55.236 killing process with pid 1690590 00:19:55.236 17:56:59 -- common/autotest_common.sh@945 -- # kill 1690590 00:19:55.236 17:56:59 -- common/autotest_common.sh@950 -- # wait 1690590 00:19:55.496 17:56:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:55.496 17:56:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:55.496 17:56:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:55.496 17:56:59 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:55.496 17:56:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:55.496 17:56:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.496 17:56:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:55.496 17:56:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.411 17:57:01 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:57.411 00:19:57.411 real 0m29.600s 00:19:57.411 user 2m9.924s 00:19:57.411 sys 0m9.650s 00:19:57.411 17:57:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:57.411 17:57:01 -- common/autotest_common.sh@10 -- # set +x 00:19:57.411 ************************************ 00:19:57.411 END TEST nvmf_fio_target 00:19:57.411 ************************************ 00:19:57.411 17:57:01 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:57.411 17:57:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:57.411 17:57:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:57.411 17:57:01 -- common/autotest_common.sh@10 -- # set +x 00:19:57.673 ************************************ 00:19:57.673 START TEST nvmf_bdevio 00:19:57.673 ************************************ 00:19:57.673 17:57:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:57.673 * Looking for test storage... 
00:19:57.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:57.673 17:57:01 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:57.673 17:57:01 -- nvmf/common.sh@7 -- # uname -s 00:19:57.673 17:57:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:57.673 17:57:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:57.673 17:57:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:57.673 17:57:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:57.673 17:57:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:57.673 17:57:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:57.673 17:57:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:57.673 17:57:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:57.673 17:57:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:57.673 17:57:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:57.673 17:57:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:57.673 17:57:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:19:57.673 17:57:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:57.673 17:57:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:57.673 17:57:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:57.673 17:57:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:57.673 17:57:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:57.673 17:57:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:57.673 17:57:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:57.673 17:57:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.673 17:57:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.673 17:57:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.673 17:57:01 -- paths/export.sh@5 -- # export PATH 00:19:57.673 17:57:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.673 17:57:01 -- nvmf/common.sh@46 -- # : 0 00:19:57.673 17:57:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:57.673 17:57:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:57.673 17:57:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:57.673 17:57:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:57.673 17:57:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:57.673 17:57:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:57.673 17:57:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:57.673 17:57:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:57.673 17:57:01 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:57.673 17:57:01 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:57.673 17:57:01 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:57.673 17:57:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:57.673 17:57:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:57.673 17:57:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:57.673 17:57:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:57.673 17:57:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:57.673 17:57:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.673 17:57:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:57.673 17:57:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.673 17:57:01 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:57.673 17:57:01 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:57.673 17:57:01 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:57.673 17:57:01 -- common/autotest_common.sh@10 -- # set +x 00:20:05.816 17:57:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:05.816 17:57:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:05.816 17:57:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:05.816 17:57:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:05.816 17:57:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:05.816 17:57:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:05.816 17:57:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:05.816 17:57:09 -- nvmf/common.sh@294 -- # net_devs=() 00:20:05.816 17:57:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:05.816 17:57:09 -- nvmf/common.sh@295 
-- # e810=() 00:20:05.816 17:57:09 -- nvmf/common.sh@295 -- # local -ga e810 00:20:05.816 17:57:09 -- nvmf/common.sh@296 -- # x722=() 00:20:05.816 17:57:09 -- nvmf/common.sh@296 -- # local -ga x722 00:20:05.816 17:57:09 -- nvmf/common.sh@297 -- # mlx=() 00:20:05.816 17:57:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:05.816 17:57:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:05.816 17:57:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:05.816 17:57:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:05.816 17:57:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:05.816 17:57:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:05.816 17:57:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:05.816 17:57:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:05.816 17:57:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:05.816 17:57:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:05.816 17:57:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:05.816 17:57:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:05.816 17:57:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:05.816 17:57:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:05.816 17:57:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:05.816 17:57:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:05.816 17:57:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:05.816 17:57:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:05.816 17:57:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:05.816 17:57:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:05.816 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:05.816 17:57:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:05.816 17:57:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:05.816 17:57:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.816 17:57:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:05.816 17:57:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:05.816 17:57:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:05.816 17:57:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:05.816 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:05.816 17:57:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:05.816 17:57:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:05.816 17:57:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.816 17:57:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:05.816 17:57:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:05.816 17:57:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:05.816 17:57:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:05.816 17:57:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:05.816 17:57:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:05.816 17:57:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.816 17:57:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:05.816 17:57:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.816 17:57:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:05.816 Found 
net devices under 0000:4b:00.0: cvl_0_0 00:20:05.816 17:57:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.816 17:57:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:05.816 17:57:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.816 17:57:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:05.816 17:57:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.816 17:57:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:05.816 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:05.816 17:57:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.816 17:57:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:05.816 17:57:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:05.816 17:57:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:05.816 17:57:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:05.816 17:57:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:05.816 17:57:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:05.816 17:57:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:05.816 17:57:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:05.816 17:57:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:05.816 17:57:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:05.816 17:57:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:05.816 17:57:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:05.816 17:57:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:05.816 17:57:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:05.816 17:57:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:05.816 17:57:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:05.816 17:57:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:05.816 17:57:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:05.816 17:57:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:05.816 17:57:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:05.816 17:57:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:05.816 17:57:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:06.078 17:57:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:06.078 17:57:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:06.078 17:57:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:06.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:06.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.795 ms 00:20:06.078 00:20:06.078 --- 10.0.0.2 ping statistics --- 00:20:06.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.078 rtt min/avg/max/mdev = 0.795/0.795/0.795/0.000 ms 00:20:06.078 17:57:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:06.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:06.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:20:06.078 00:20:06.078 --- 10.0.0.1 ping statistics --- 00:20:06.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.078 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:20:06.078 17:57:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:06.078 17:57:10 -- nvmf/common.sh@410 -- # return 0 00:20:06.078 17:57:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:06.078 17:57:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:06.078 17:57:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:06.078 17:57:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:06.078 17:57:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:06.078 17:57:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:06.078 17:57:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:06.078 17:57:10 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:06.078 17:57:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:06.078 17:57:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:06.078 17:57:10 -- common/autotest_common.sh@10 -- # set +x 00:20:06.078 17:57:10 -- nvmf/common.sh@469 -- # nvmfpid=1699537 00:20:06.078 17:57:10 -- nvmf/common.sh@470 -- # waitforlisten 1699537 00:20:06.078 17:57:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:20:06.078 17:57:10 -- common/autotest_common.sh@819 -- # '[' -z 1699537 ']' 00:20:06.079 17:57:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.079 17:57:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:06.079 17:57:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.079 17:57:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:06.079 17:57:10 -- common/autotest_common.sh@10 -- # set +x 00:20:06.079 [2024-07-22 17:57:10.292494] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:06.079 [2024-07-22 17:57:10.292542] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.079 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.411 [2024-07-22 17:57:10.363115] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:06.411 [2024-07-22 17:57:10.423873] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:06.411 [2024-07-22 17:57:10.424002] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:06.411 [2024-07-22 17:57:10.424012] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:06.411 [2024-07-22 17:57:10.424020] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
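For orientation, the nvmf_tcp_init steps traced above turn the two E810 ports into a point-to-point NVMe/TCP test link: the target-side port (cvl_0_0) is moved into its own network namespace, both sides get addresses on 10.0.0.0/24, port 4420 is opened, and reachability is verified with ping before nvmf_tgt is launched. Condensed from the trace (interface names and addresses exactly as logged; this is a summary, not the common.sh source):

ip netns add cvl_0_0_ns_spdk                                       # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                 # sanity check before starting the target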
00:20:06.411 [2024-07-22 17:57:10.424156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:06.411 [2024-07-22 17:57:10.424306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:06.411 [2024-07-22 17:57:10.424554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:06.411 [2024-07-22 17:57:10.424554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:07.080 17:57:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:07.080 17:57:11 -- common/autotest_common.sh@852 -- # return 0 00:20:07.080 17:57:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:07.080 17:57:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:07.080 17:57:11 -- common/autotest_common.sh@10 -- # set +x 00:20:07.080 17:57:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.080 17:57:11 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:07.080 17:57:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.080 17:57:11 -- common/autotest_common.sh@10 -- # set +x 00:20:07.080 [2024-07-22 17:57:11.176815] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.080 17:57:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:07.080 17:57:11 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:07.080 17:57:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.080 17:57:11 -- common/autotest_common.sh@10 -- # set +x 00:20:07.080 Malloc0 00:20:07.080 17:57:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:07.080 17:57:11 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:07.080 17:57:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.080 17:57:11 -- common/autotest_common.sh@10 -- # set +x 00:20:07.080 17:57:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:07.080 17:57:11 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:07.080 17:57:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.080 17:57:11 -- common/autotest_common.sh@10 -- # set +x 00:20:07.080 17:57:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:07.080 17:57:11 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:07.080 17:57:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.080 17:57:11 -- common/autotest_common.sh@10 -- # set +x 00:20:07.080 [2024-07-22 17:57:11.235551] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.080 17:57:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:07.080 17:57:11 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:20:07.080 17:57:11 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:07.080 17:57:11 -- nvmf/common.sh@520 -- # config=() 00:20:07.080 17:57:11 -- nvmf/common.sh@520 -- # local subsystem config 00:20:07.080 17:57:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:07.080 17:57:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:07.080 { 00:20:07.080 "params": { 00:20:07.080 "name": "Nvme$subsystem", 00:20:07.080 "trtype": "$TEST_TRANSPORT", 00:20:07.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:07.080 "adrfam": "ipv4", 00:20:07.080 "trsvcid": 
"$NVMF_PORT", 00:20:07.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:07.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:07.080 "hdgst": ${hdgst:-false}, 00:20:07.080 "ddgst": ${ddgst:-false} 00:20:07.080 }, 00:20:07.080 "method": "bdev_nvme_attach_controller" 00:20:07.080 } 00:20:07.080 EOF 00:20:07.080 )") 00:20:07.080 17:57:11 -- nvmf/common.sh@542 -- # cat 00:20:07.080 17:57:11 -- nvmf/common.sh@544 -- # jq . 00:20:07.080 17:57:11 -- nvmf/common.sh@545 -- # IFS=, 00:20:07.080 17:57:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:07.080 "params": { 00:20:07.080 "name": "Nvme1", 00:20:07.080 "trtype": "tcp", 00:20:07.080 "traddr": "10.0.0.2", 00:20:07.080 "adrfam": "ipv4", 00:20:07.080 "trsvcid": "4420", 00:20:07.080 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.080 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:07.080 "hdgst": false, 00:20:07.080 "ddgst": false 00:20:07.080 }, 00:20:07.080 "method": "bdev_nvme_attach_controller" 00:20:07.080 }' 00:20:07.080 [2024-07-22 17:57:11.283715] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:07.080 [2024-07-22 17:57:11.283763] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1700006 ] 00:20:07.080 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.341 [2024-07-22 17:57:11.363802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:07.341 [2024-07-22 17:57:11.424738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.341 [2024-07-22 17:57:11.424869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.341 [2024-07-22 17:57:11.424871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.341 [2024-07-22 17:57:11.564773] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:20:07.341 [2024-07-22 17:57:11.564803] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:07.341 I/O targets: 00:20:07.341 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:07.341 00:20:07.341 00:20:07.341 CUnit - A unit testing framework for C - Version 2.1-3 00:20:07.341 http://cunit.sourceforge.net/ 00:20:07.341 00:20:07.341 00:20:07.341 Suite: bdevio tests on: Nvme1n1 00:20:07.602 Test: blockdev write read block ...passed 00:20:07.602 Test: blockdev write zeroes read block ...passed 00:20:07.602 Test: blockdev write zeroes read no split ...passed 00:20:07.602 Test: blockdev write zeroes read split ...passed 00:20:07.602 Test: blockdev write zeroes read split partial ...passed 00:20:07.602 Test: blockdev reset ...[2024-07-22 17:57:11.720661] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:07.602 [2024-07-22 17:57:11.720717] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f63ef0 (9): Bad file descriptor 00:20:07.602 [2024-07-22 17:57:11.735627] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:07.602 passed 00:20:07.602 Test: blockdev write read 8 blocks ...passed 00:20:07.602 Test: blockdev write read size > 128k ...passed 00:20:07.602 Test: blockdev write read invalid size ...passed 00:20:07.602 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:07.602 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:07.602 Test: blockdev write read max offset ...passed 00:20:07.602 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:07.863 Test: blockdev writev readv 8 blocks ...passed 00:20:07.863 Test: blockdev writev readv 30 x 1block ...passed 00:20:07.863 Test: blockdev writev readv block ...passed 00:20:07.863 Test: blockdev writev readv size > 128k ...passed 00:20:07.863 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:07.863 Test: blockdev comparev and writev ...[2024-07-22 17:57:11.999591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:07.863 [2024-07-22 17:57:11.999614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.863 [2024-07-22 17:57:11.999625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:07.863 [2024-07-22 17:57:11.999630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:07.863 [2024-07-22 17:57:12.000297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:07.863 [2024-07-22 17:57:12.000305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:07.863 [2024-07-22 17:57:12.000314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:07.864 [2024-07-22 17:57:12.000320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:07.864 [2024-07-22 17:57:12.000958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:07.864 [2024-07-22 17:57:12.000967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:07.864 [2024-07-22 17:57:12.000976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:07.864 [2024-07-22 17:57:12.000981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:07.864 [2024-07-22 17:57:12.001640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:07.864 [2024-07-22 17:57:12.001647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:07.864 [2024-07-22 17:57:12.001660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:07.864 [2024-07-22 17:57:12.001666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:07.864 passed 00:20:07.864 Test: blockdev nvme passthru rw ...passed 00:20:07.864 Test: blockdev nvme passthru vendor specific ...[2024-07-22 17:57:12.085934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:07.864 [2024-07-22 17:57:12.085946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:07.864 [2024-07-22 17:57:12.086293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:07.864 [2024-07-22 17:57:12.086300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:07.864 [2024-07-22 17:57:12.086686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:07.864 [2024-07-22 17:57:12.086694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:07.864 [2024-07-22 17:57:12.087086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:07.864 [2024-07-22 17:57:12.087093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:07.864 passed 00:20:07.864 Test: blockdev nvme admin passthru ...passed 00:20:08.125 Test: blockdev copy ...passed 00:20:08.125 00:20:08.125 Run Summary: Type Total Ran Passed Failed Inactive 00:20:08.125 suites 1 1 n/a 0 0 00:20:08.125 tests 23 23 23 0 0 00:20:08.125 asserts 152 152 152 0 n/a 00:20:08.125 00:20:08.125 Elapsed time = 1.158 seconds 00:20:08.125 17:57:12 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:08.125 17:57:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:08.125 17:57:12 -- common/autotest_common.sh@10 -- # set +x 00:20:08.125 17:57:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:08.125 17:57:12 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:08.125 17:57:12 -- target/bdevio.sh@30 -- # nvmftestfini 00:20:08.125 17:57:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:08.125 17:57:12 -- nvmf/common.sh@116 -- # sync 00:20:08.125 17:57:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:08.125 17:57:12 -- nvmf/common.sh@119 -- # set +e 00:20:08.125 17:57:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:08.125 17:57:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:08.125 rmmod nvme_tcp 00:20:08.125 rmmod nvme_fabrics 00:20:08.125 rmmod nvme_keyring 00:20:08.125 17:57:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:08.125 17:57:12 -- nvmf/common.sh@123 -- # set -e 00:20:08.125 17:57:12 -- nvmf/common.sh@124 -- # return 0 00:20:08.125 17:57:12 -- nvmf/common.sh@477 -- # '[' -n 1699537 ']' 00:20:08.125 17:57:12 -- nvmf/common.sh@478 -- # killprocess 1699537 00:20:08.125 17:57:12 -- common/autotest_common.sh@926 -- # '[' -z 1699537 ']' 00:20:08.125 17:57:12 -- common/autotest_common.sh@930 -- # kill -0 1699537 00:20:08.125 17:57:12 -- common/autotest_common.sh@931 -- # uname 00:20:08.125 17:57:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:08.125 17:57:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1699537 00:20:08.387 17:57:12 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:20:08.387 17:57:12 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:20:08.387 17:57:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1699537' 00:20:08.387 killing process with pid 1699537 00:20:08.387 17:57:12 -- common/autotest_common.sh@945 -- # kill 1699537 00:20:08.387 17:57:12 -- common/autotest_common.sh@950 -- # wait 1699537 00:20:08.387 17:57:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:08.387 17:57:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:08.387 17:57:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:08.387 17:57:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:08.387 17:57:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:08.387 17:57:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.387 17:57:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:08.387 17:57:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.933 17:57:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:10.933 00:20:10.933 real 0m12.932s 00:20:10.933 user 0m12.814s 00:20:10.933 sys 0m6.689s 00:20:10.933 17:57:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:10.933 17:57:14 -- common/autotest_common.sh@10 -- # set +x 00:20:10.933 ************************************ 00:20:10.933 END TEST nvmf_bdevio 00:20:10.933 ************************************ 00:20:10.933 17:57:14 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:20:10.933 17:57:14 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:10.933 17:57:14 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:20:10.933 17:57:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:10.933 17:57:14 -- common/autotest_common.sh@10 -- # set +x 00:20:10.933 ************************************ 00:20:10.933 START TEST nvmf_bdevio_no_huge 00:20:10.933 ************************************ 00:20:10.933 17:57:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:10.933 * Looking for test storage... 
00:20:10.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:10.933 17:57:14 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:10.933 17:57:14 -- nvmf/common.sh@7 -- # uname -s 00:20:10.933 17:57:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:10.933 17:57:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:10.933 17:57:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:10.933 17:57:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:10.933 17:57:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:10.934 17:57:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:10.934 17:57:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:10.934 17:57:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:10.934 17:57:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:10.934 17:57:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:10.934 17:57:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:10.934 17:57:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:10.934 17:57:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:10.934 17:57:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:10.934 17:57:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:10.934 17:57:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:10.934 17:57:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:10.934 17:57:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:10.934 17:57:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:10.934 17:57:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.934 17:57:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.934 17:57:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.934 17:57:14 -- paths/export.sh@5 -- # export PATH 00:20:10.934 17:57:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.934 17:57:14 -- nvmf/common.sh@46 -- # : 0 00:20:10.934 17:57:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:10.934 17:57:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:10.934 17:57:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:10.934 17:57:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:10.934 17:57:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:10.934 17:57:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:10.934 17:57:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:10.934 17:57:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:10.934 17:57:14 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:10.934 17:57:14 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:10.934 17:57:14 -- target/bdevio.sh@14 -- # nvmftestinit 00:20:10.934 17:57:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:10.934 17:57:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:10.934 17:57:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:10.934 17:57:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:10.934 17:57:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:10.934 17:57:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.934 17:57:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.934 17:57:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.934 17:57:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:10.934 17:57:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:10.934 17:57:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:10.934 17:57:14 -- common/autotest_common.sh@10 -- # set +x 00:20:19.076 17:57:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:19.076 17:57:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:19.076 17:57:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:19.076 17:57:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:19.076 17:57:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:19.076 17:57:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:19.076 17:57:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:19.076 17:57:22 -- nvmf/common.sh@294 -- # net_devs=() 00:20:19.076 17:57:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:19.076 17:57:22 -- nvmf/common.sh@295 
-- # e810=() 00:20:19.076 17:57:22 -- nvmf/common.sh@295 -- # local -ga e810 00:20:19.076 17:57:22 -- nvmf/common.sh@296 -- # x722=() 00:20:19.076 17:57:22 -- nvmf/common.sh@296 -- # local -ga x722 00:20:19.076 17:57:22 -- nvmf/common.sh@297 -- # mlx=() 00:20:19.076 17:57:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:19.076 17:57:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:19.076 17:57:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:19.076 17:57:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:19.076 17:57:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:19.076 17:57:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:19.076 17:57:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:19.076 17:57:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:19.076 17:57:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:19.076 17:57:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:19.076 17:57:22 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:19.076 17:57:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:19.076 17:57:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:19.076 17:57:22 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:19.076 17:57:22 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:19.076 17:57:22 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:19.076 17:57:22 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:19.076 17:57:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:19.076 17:57:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:19.076 17:57:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:19.076 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:19.076 17:57:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:19.076 17:57:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:19.076 17:57:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.076 17:57:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.076 17:57:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:19.076 17:57:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:19.076 17:57:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:19.076 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:19.076 17:57:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:19.076 17:57:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:19.076 17:57:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.076 17:57:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.076 17:57:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:19.076 17:57:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:19.076 17:57:22 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:19.076 17:57:22 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:19.076 17:57:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:19.076 17:57:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.076 17:57:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:19.076 17:57:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.076 17:57:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:19.076 Found 
net devices under 0000:4b:00.0: cvl_0_0 00:20:19.076 17:57:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.076 17:57:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:19.076 17:57:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.076 17:57:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:19.076 17:57:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.076 17:57:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:19.076 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:19.076 17:57:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.076 17:57:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:19.076 17:57:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:19.076 17:57:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:19.076 17:57:22 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:19.076 17:57:22 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:19.076 17:57:22 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:19.076 17:57:22 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:19.076 17:57:22 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:19.076 17:57:22 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:19.076 17:57:22 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:19.076 17:57:22 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:19.076 17:57:22 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:19.076 17:57:22 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:19.076 17:57:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:19.076 17:57:22 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:19.076 17:57:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:19.076 17:57:22 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:19.076 17:57:22 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:19.076 17:57:22 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:19.076 17:57:22 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:19.076 17:57:22 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:19.076 17:57:22 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:19.076 17:57:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:19.076 17:57:22 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:19.076 17:57:22 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:19.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:19.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.547 ms 00:20:19.076 00:20:19.076 --- 10.0.0.2 ping statistics --- 00:20:19.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.076 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:20:19.076 17:57:22 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:19.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:19.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:20:19.076 00:20:19.076 --- 10.0.0.1 ping statistics --- 00:20:19.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.077 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:20:19.077 17:57:22 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:19.077 17:57:22 -- nvmf/common.sh@410 -- # return 0 00:20:19.077 17:57:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:19.077 17:57:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:19.077 17:57:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:19.077 17:57:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:19.077 17:57:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:19.077 17:57:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:19.077 17:57:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:19.077 17:57:22 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:19.077 17:57:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:19.077 17:57:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:19.077 17:57:22 -- common/autotest_common.sh@10 -- # set +x 00:20:19.077 17:57:22 -- nvmf/common.sh@469 -- # nvmfpid=1704549 00:20:19.077 17:57:22 -- nvmf/common.sh@470 -- # waitforlisten 1704549 00:20:19.077 17:57:22 -- common/autotest_common.sh@819 -- # '[' -z 1704549 ']' 00:20:19.077 17:57:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.077 17:57:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:19.077 17:57:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.077 17:57:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:19.077 17:57:22 -- common/autotest_common.sh@10 -- # set +x 00:20:19.077 17:57:22 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:19.077 [2024-07-22 17:57:22.793074] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:19.077 [2024-07-22 17:57:22.793159] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:19.077 [2024-07-22 17:57:22.877086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:19.077 [2024-07-22 17:57:22.964381] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:19.077 [2024-07-22 17:57:22.964502] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.077 [2024-07-22 17:57:22.964511] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.077 [2024-07-22 17:57:22.964519] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
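The nvmf_bdevio_no_huge variant repeats the same setup, but nvmfappstart launches the target without hugepages and with a 1024 MB memory cap; bdevio itself is started with the same --no-huge -s 1024 flags further down the trace. The target invocation, with the workspace path shortened and the flags exactly as logged:

# -m 0x78 pins reactors to cores 3-6 (matching the reactor start messages in the trace);
# --no-huge -s 1024 runs the app on 1024 MB of regular pages instead of hugepages
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78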
00:20:19.077 [2024-07-22 17:57:22.964655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:19.077 [2024-07-22 17:57:22.964801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:19.077 [2024-07-22 17:57:22.964914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:19.077 [2024-07-22 17:57:22.964915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:19.648 17:57:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:19.648 17:57:23 -- common/autotest_common.sh@852 -- # return 0 00:20:19.648 17:57:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:19.648 17:57:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:19.648 17:57:23 -- common/autotest_common.sh@10 -- # set +x 00:20:19.648 17:57:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.648 17:57:23 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:19.648 17:57:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:19.648 17:57:23 -- common/autotest_common.sh@10 -- # set +x 00:20:19.648 [2024-07-22 17:57:23.661809] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.648 17:57:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:19.648 17:57:23 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:19.648 17:57:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:19.648 17:57:23 -- common/autotest_common.sh@10 -- # set +x 00:20:19.648 Malloc0 00:20:19.648 17:57:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:19.648 17:57:23 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:19.648 17:57:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:19.648 17:57:23 -- common/autotest_common.sh@10 -- # set +x 00:20:19.648 17:57:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:19.648 17:57:23 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:19.648 17:57:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:19.648 17:57:23 -- common/autotest_common.sh@10 -- # set +x 00:20:19.648 17:57:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:19.648 17:57:23 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:19.648 17:57:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:19.648 17:57:23 -- common/autotest_common.sh@10 -- # set +x 00:20:19.648 [2024-07-22 17:57:23.714105] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.648 17:57:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:19.648 17:57:23 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:19.648 17:57:23 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:19.648 17:57:23 -- nvmf/common.sh@520 -- # config=() 00:20:19.648 17:57:23 -- nvmf/common.sh@520 -- # local subsystem config 00:20:19.648 17:57:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:19.648 17:57:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:19.648 { 00:20:19.648 "params": { 00:20:19.648 "name": "Nvme$subsystem", 00:20:19.648 "trtype": "$TEST_TRANSPORT", 00:20:19.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.648 "adrfam": "ipv4", 00:20:19.648 
"trsvcid": "$NVMF_PORT", 00:20:19.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.648 "hdgst": ${hdgst:-false}, 00:20:19.648 "ddgst": ${ddgst:-false} 00:20:19.648 }, 00:20:19.648 "method": "bdev_nvme_attach_controller" 00:20:19.648 } 00:20:19.648 EOF 00:20:19.648 )") 00:20:19.648 17:57:23 -- nvmf/common.sh@542 -- # cat 00:20:19.648 17:57:23 -- nvmf/common.sh@544 -- # jq . 00:20:19.648 17:57:23 -- nvmf/common.sh@545 -- # IFS=, 00:20:19.648 17:57:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:19.648 "params": { 00:20:19.648 "name": "Nvme1", 00:20:19.649 "trtype": "tcp", 00:20:19.649 "traddr": "10.0.0.2", 00:20:19.649 "adrfam": "ipv4", 00:20:19.649 "trsvcid": "4420", 00:20:19.649 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.649 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:19.649 "hdgst": false, 00:20:19.649 "ddgst": false 00:20:19.649 }, 00:20:19.649 "method": "bdev_nvme_attach_controller" 00:20:19.649 }' 00:20:19.649 [2024-07-22 17:57:23.763683] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:19.649 [2024-07-22 17:57:23.763732] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1704604 ] 00:20:19.649 [2024-07-22 17:57:23.845329] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:19.910 [2024-07-22 17:57:23.929312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.910 [2024-07-22 17:57:23.929452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.910 [2024-07-22 17:57:23.929569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.910 [2024-07-22 17:57:24.107447] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:20:19.910 [2024-07-22 17:57:24.107474] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:19.910 I/O targets: 00:20:19.910 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:19.910 00:20:19.910 00:20:19.910 CUnit - A unit testing framework for C - Version 2.1-3 00:20:19.910 http://cunit.sourceforge.net/ 00:20:19.910 00:20:19.910 00:20:19.910 Suite: bdevio tests on: Nvme1n1 00:20:19.910 Test: blockdev write read block ...passed 00:20:20.170 Test: blockdev write zeroes read block ...passed 00:20:20.170 Test: blockdev write zeroes read no split ...passed 00:20:20.170 Test: blockdev write zeroes read split ...passed 00:20:20.170 Test: blockdev write zeroes read split partial ...passed 00:20:20.170 Test: blockdev reset ...[2024-07-22 17:57:24.268339] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.170 [2024-07-22 17:57:24.268400] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2c480 (9): Bad file descriptor 00:20:20.170 [2024-07-22 17:57:24.286844] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:20.170 passed 00:20:20.170 Test: blockdev write read 8 blocks ...passed 00:20:20.170 Test: blockdev write read size > 128k ...passed 00:20:20.170 Test: blockdev write read invalid size ...passed 00:20:20.170 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:20.170 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:20.171 Test: blockdev write read max offset ...passed 00:20:20.171 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:20.431 Test: blockdev writev readv 8 blocks ...passed 00:20:20.431 Test: blockdev writev readv 30 x 1block ...passed 00:20:20.431 Test: blockdev writev readv block ...passed 00:20:20.431 Test: blockdev writev readv size > 128k ...passed 00:20:20.431 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:20.431 Test: blockdev comparev and writev ...[2024-07-22 17:57:24.514577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:20.431 [2024-07-22 17:57:24.514599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:20.431 [2024-07-22 17:57:24.514610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:20.431 [2024-07-22 17:57:24.514616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.431 [2024-07-22 17:57:24.515341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:20.431 [2024-07-22 17:57:24.515351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:20.431 [2024-07-22 17:57:24.515361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:20.431 [2024-07-22 17:57:24.515367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:20.431 [2024-07-22 17:57:24.516041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:20.431 [2024-07-22 17:57:24.516048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:20.431 [2024-07-22 17:57:24.516057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:20.431 [2024-07-22 17:57:24.516063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:20.431 [2024-07-22 17:57:24.516774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:20.431 [2024-07-22 17:57:24.516781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:20.431 [2024-07-22 17:57:24.516791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:20.431 [2024-07-22 17:57:24.516796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:20.431 passed 00:20:20.431 Test: blockdev nvme passthru rw ...passed 00:20:20.431 Test: blockdev nvme passthru vendor specific ...[2024-07-22 17:57:24.601098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:20.431 [2024-07-22 17:57:24.601113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:20.431 [2024-07-22 17:57:24.601524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:20.431 [2024-07-22 17:57:24.601531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:20.431 [2024-07-22 17:57:24.601917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:20.431 [2024-07-22 17:57:24.601924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:20.431 [2024-07-22 17:57:24.602314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:20.431 [2024-07-22 17:57:24.602321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:20.431 passed 00:20:20.431 Test: blockdev nvme admin passthru ...passed 00:20:20.431 Test: blockdev copy ...passed 00:20:20.431 00:20:20.431 Run Summary: Type Total Ran Passed Failed Inactive 00:20:20.431 suites 1 1 n/a 0 0 00:20:20.431 tests 23 23 23 0 0 00:20:20.431 asserts 152 152 152 0 n/a 00:20:20.431 00:20:20.431 Elapsed time = 1.109 seconds 00:20:20.692 17:57:24 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:20.692 17:57:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:20.692 17:57:24 -- common/autotest_common.sh@10 -- # set +x 00:20:20.692 17:57:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:20.692 17:57:24 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:20.692 17:57:24 -- target/bdevio.sh@30 -- # nvmftestfini 00:20:20.692 17:57:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:20.692 17:57:24 -- nvmf/common.sh@116 -- # sync 00:20:20.692 17:57:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:20.692 17:57:24 -- nvmf/common.sh@119 -- # set +e 00:20:20.692 17:57:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:20.692 17:57:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:20.692 rmmod nvme_tcp 00:20:20.692 rmmod nvme_fabrics 00:20:20.692 rmmod nvme_keyring 00:20:20.692 17:57:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:20.952 17:57:24 -- nvmf/common.sh@123 -- # set -e 00:20:20.952 17:57:24 -- nvmf/common.sh@124 -- # return 0 00:20:20.952 17:57:24 -- nvmf/common.sh@477 -- # '[' -n 1704549 ']' 00:20:20.952 17:57:24 -- nvmf/common.sh@478 -- # killprocess 1704549 00:20:20.952 17:57:24 -- common/autotest_common.sh@926 -- # '[' -z 1704549 ']' 00:20:20.952 17:57:24 -- common/autotest_common.sh@930 -- # kill -0 1704549 00:20:20.952 17:57:24 -- common/autotest_common.sh@931 -- # uname 00:20:20.952 17:57:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:20.952 17:57:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1704549 00:20:20.952 17:57:25 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:20:20.952 17:57:25 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:20:20.953 17:57:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1704549' 00:20:20.953 killing process with pid 1704549 00:20:20.953 17:57:25 -- common/autotest_common.sh@945 -- # kill 1704549 00:20:20.953 17:57:25 -- common/autotest_common.sh@950 -- # wait 1704549 00:20:21.214 17:57:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:21.214 17:57:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:21.214 17:57:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:21.214 17:57:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:21.214 17:57:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:21.214 17:57:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.214 17:57:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:21.214 17:57:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.127 17:57:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:23.127 00:20:23.127 real 0m12.707s 00:20:23.127 user 0m13.395s 00:20:23.127 sys 0m6.776s 00:20:23.127 17:57:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:23.127 17:57:27 -- common/autotest_common.sh@10 -- # set +x 00:20:23.127 ************************************ 00:20:23.127 END TEST nvmf_bdevio_no_huge 00:20:23.127 ************************************ 00:20:23.388 17:57:27 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:23.388 17:57:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:23.388 17:57:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:23.388 17:57:27 -- common/autotest_common.sh@10 -- # set +x 00:20:23.388 ************************************ 00:20:23.388 START TEST nvmf_tls 00:20:23.388 ************************************ 00:20:23.388 17:57:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:23.388 * Looking for test storage... 
00:20:23.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:23.388 17:57:27 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:23.388 17:57:27 -- nvmf/common.sh@7 -- # uname -s 00:20:23.388 17:57:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.388 17:57:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.388 17:57:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.388 17:57:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.388 17:57:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.388 17:57:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.388 17:57:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.388 17:57:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.388 17:57:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.388 17:57:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.388 17:57:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:23.388 17:57:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:20:23.388 17:57:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.388 17:57:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.388 17:57:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:23.388 17:57:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:23.388 17:57:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.388 17:57:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.388 17:57:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.388 17:57:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.388 17:57:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.388 17:57:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.388 17:57:27 -- paths/export.sh@5 -- # export PATH 00:20:23.388 17:57:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.388 17:57:27 -- nvmf/common.sh@46 -- # : 0 00:20:23.388 17:57:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:23.388 17:57:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:23.388 17:57:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:23.388 17:57:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.388 17:57:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.388 17:57:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:23.388 17:57:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:23.388 17:57:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:23.388 17:57:27 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:23.388 17:57:27 -- target/tls.sh@71 -- # nvmftestinit 00:20:23.388 17:57:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:23.389 17:57:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.389 17:57:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:23.389 17:57:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:23.389 17:57:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:23.389 17:57:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.389 17:57:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.389 17:57:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.389 17:57:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:23.389 17:57:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:23.389 17:57:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:23.389 17:57:27 -- common/autotest_common.sh@10 -- # set +x 00:20:31.525 17:57:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:31.525 17:57:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:31.525 17:57:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:31.525 17:57:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:31.525 17:57:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:31.525 17:57:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:31.525 17:57:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:31.525 17:57:35 -- nvmf/common.sh@294 -- # net_devs=() 00:20:31.525 17:57:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:31.525 17:57:35 -- nvmf/common.sh@295 -- # e810=() 00:20:31.525 
17:57:35 -- nvmf/common.sh@295 -- # local -ga e810 00:20:31.525 17:57:35 -- nvmf/common.sh@296 -- # x722=() 00:20:31.525 17:57:35 -- nvmf/common.sh@296 -- # local -ga x722 00:20:31.525 17:57:35 -- nvmf/common.sh@297 -- # mlx=() 00:20:31.525 17:57:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:31.525 17:57:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:31.525 17:57:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:31.525 17:57:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:31.525 17:57:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:31.525 17:57:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:31.525 17:57:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:31.525 17:57:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:31.525 17:57:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:31.525 17:57:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:31.525 17:57:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:31.525 17:57:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:31.525 17:57:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:31.525 17:57:35 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:31.525 17:57:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:31.525 17:57:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:31.525 17:57:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:31.525 17:57:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:31.525 17:57:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:31.525 17:57:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:31.525 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:31.525 17:57:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:31.525 17:57:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:31.525 17:57:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:31.525 17:57:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:31.525 17:57:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:31.525 17:57:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:31.525 17:57:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:31.525 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:31.525 17:57:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:31.525 17:57:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:31.525 17:57:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:31.525 17:57:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:31.525 17:57:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:31.525 17:57:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:31.525 17:57:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:31.525 17:57:35 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:31.525 17:57:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:31.525 17:57:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:31.525 17:57:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:31.525 17:57:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:31.525 17:57:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:31.525 Found net devices under 
0000:4b:00.0: cvl_0_0 00:20:31.525 17:57:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:31.525 17:57:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:31.525 17:57:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:31.525 17:57:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:31.525 17:57:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:31.525 17:57:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:31.525 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:31.525 17:57:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:31.525 17:57:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:31.525 17:57:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:31.525 17:57:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:31.525 17:57:35 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:31.525 17:57:35 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:31.525 17:57:35 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:31.525 17:57:35 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:31.525 17:57:35 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:31.525 17:57:35 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:31.525 17:57:35 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:31.525 17:57:35 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:31.525 17:57:35 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:31.525 17:57:35 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:31.526 17:57:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:31.526 17:57:35 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:31.526 17:57:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:31.526 17:57:35 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:31.526 17:57:35 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:31.786 17:57:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:31.786 17:57:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:31.786 17:57:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:31.786 17:57:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:31.786 17:57:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:31.786 17:57:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:31.786 17:57:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:31.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:31.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:20:31.786 00:20:31.786 --- 10.0.0.2 ping statistics --- 00:20:31.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:31.786 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:20:31.786 17:57:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:31.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:31.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:20:31.786 00:20:31.786 --- 10.0.0.1 ping statistics --- 00:20:31.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:31.786 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:20:31.786 17:57:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:31.786 17:57:35 -- nvmf/common.sh@410 -- # return 0 00:20:31.786 17:57:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:31.786 17:57:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:31.786 17:57:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:31.786 17:57:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:31.786 17:57:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:31.786 17:57:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:31.786 17:57:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:31.786 17:57:36 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:31.786 17:57:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:31.786 17:57:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:31.786 17:57:36 -- common/autotest_common.sh@10 -- # set +x 00:20:31.786 17:57:36 -- nvmf/common.sh@469 -- # nvmfpid=1709397 00:20:31.786 17:57:36 -- nvmf/common.sh@470 -- # waitforlisten 1709397 00:20:31.786 17:57:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:31.786 17:57:36 -- common/autotest_common.sh@819 -- # '[' -z 1709397 ']' 00:20:31.786 17:57:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.786 17:57:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:31.786 17:57:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:31.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:31.787 17:57:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:31.787 17:57:36 -- common/autotest_common.sh@10 -- # set +x 00:20:32.047 [2024-07-22 17:57:36.067116] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:32.047 [2024-07-22 17:57:36.067180] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:32.047 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.047 [2024-07-22 17:57:36.145336] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.047 [2024-07-22 17:57:36.212455] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:32.047 [2024-07-22 17:57:36.212578] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:32.047 [2024-07-22 17:57:36.212588] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:32.047 [2024-07-22 17:57:36.212595] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:32.047 [2024-07-22 17:57:36.212617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.988 17:57:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:32.988 17:57:36 -- common/autotest_common.sh@852 -- # return 0 00:20:32.988 17:57:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:32.988 17:57:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:32.988 17:57:36 -- common/autotest_common.sh@10 -- # set +x 00:20:32.988 17:57:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:32.988 17:57:36 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:20:32.988 17:57:36 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:32.988 true 00:20:32.988 17:57:37 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:32.988 17:57:37 -- target/tls.sh@82 -- # jq -r .tls_version 00:20:33.249 17:57:37 -- target/tls.sh@82 -- # version=0 00:20:33.249 17:57:37 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:20:33.249 17:57:37 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:33.249 17:57:37 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:33.249 17:57:37 -- target/tls.sh@90 -- # jq -r .tls_version 00:20:33.509 17:57:37 -- target/tls.sh@90 -- # version=13 00:20:33.509 17:57:37 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:20:33.509 17:57:37 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:33.769 17:57:37 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:33.769 17:57:37 -- target/tls.sh@98 -- # jq -r .tls_version 00:20:34.030 17:57:38 -- target/tls.sh@98 -- # version=7 00:20:34.030 17:57:38 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:20:34.030 17:57:38 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:34.030 17:57:38 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:34.030 17:57:38 -- target/tls.sh@105 -- # ktls=false 00:20:34.030 17:57:38 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:20:34.030 17:57:38 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:34.291 17:57:38 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:34.291 17:57:38 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:34.551 17:57:38 -- target/tls.sh@113 -- # ktls=true 00:20:34.551 17:57:38 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:20:34.551 17:57:38 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:34.551 17:57:38 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:34.551 17:57:38 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:20:34.811 17:57:38 -- target/tls.sh@121 -- # ktls=false 00:20:34.811 17:57:38 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:20:34.811 17:57:38 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 
00:20:34.811 17:57:38 -- target/tls.sh@49 -- # local key hash crc 00:20:34.811 17:57:38 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:20:34.811 17:57:38 -- target/tls.sh@51 -- # hash=01 00:20:34.811 17:57:38 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:20:34.811 17:57:38 -- target/tls.sh@52 -- # gzip -1 -c 00:20:34.811 17:57:38 -- target/tls.sh@52 -- # tail -c8 00:20:34.811 17:57:38 -- target/tls.sh@52 -- # head -c 4 00:20:34.811 17:57:38 -- target/tls.sh@52 -- # crc='p$H�' 00:20:34.811 17:57:38 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:20:34.811 17:57:38 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:20:34.812 17:57:38 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:34.812 17:57:38 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:34.812 17:57:38 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:20:34.812 17:57:38 -- target/tls.sh@49 -- # local key hash crc 00:20:34.812 17:57:38 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:20:34.812 17:57:38 -- target/tls.sh@51 -- # hash=01 00:20:34.812 17:57:38 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:20:34.812 17:57:38 -- target/tls.sh@52 -- # gzip -1 -c 00:20:34.812 17:57:38 -- target/tls.sh@52 -- # tail -c8 00:20:34.812 17:57:38 -- target/tls.sh@52 -- # head -c 4 00:20:34.812 17:57:38 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:20:34.812 17:57:38 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:20:34.812 17:57:38 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:20:34.812 17:57:39 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:34.812 17:57:39 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:34.812 17:57:39 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:34.812 17:57:39 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:34.812 17:57:39 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:34.812 17:57:39 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:34.812 17:57:39 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:34.812 17:57:39 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:34.812 17:57:39 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:35.073 17:57:39 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:35.334 17:57:39 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:35.334 17:57:39 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:35.334 17:57:39 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:35.334 [2024-07-22 17:57:39.603663] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
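[editor's note, not part of the captured output] The format_interchange_psk trace above builds the NVMe/TCP TLS interchange PSK by hand: it computes a CRC32 of the configured key with gzip, appends it to the key, base64-encodes the result, and wraps it as NVMeTLSkey-1:<hash>:<base64>:. A minimal standalone sketch of that derivation, assuming GNU gzip and coreutils and reusing the key/hash values shown in the trace:

#!/usr/bin/env bash
# Sketch only: reproduce the interchange PSK string derived in the trace above.
key=00112233445566778899aabbccddeeff    # configured key, treated as an ASCII string
hash=01                                 # "01" is the hash identifier used in the trace

# gzip -1 -c ends its output with a trailer of CRC32 (4 bytes, little-endian)
# followed by ISIZE (4 bytes), so "tail -c8 | head -c4" extracts the CRC32 of
# the input. The CRC bytes are binary, so they are piped straight into base64
# rather than stored in a shell variable (which could not hold NUL bytes).
psk="NVMeTLSkey-1:${hash}:$({ echo -n "$key"
                              echo -n "$key" | gzip -1 -c | tail -c8 | head -c4
                            } | base64):"

echo "$psk"
# Expected, matching the trace: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The resulting string is what the test writes to key1.txt/key2.txt (chmod 0600) and passes to nvmf_subsystem_add_host and bdev_nvme_attach_controller via --psk, as the subsequent trace lines show.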
00:20:35.594 17:57:39 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:35.594 17:57:39 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:35.854 [2024-07-22 17:57:39.940512] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:35.854 [2024-07-22 17:57:39.940704] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.854 17:57:39 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:35.854 malloc0 00:20:35.854 17:57:40 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:36.114 17:57:40 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:36.374 17:57:40 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:36.374 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.462 Initializing NVMe Controllers 00:20:46.462 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:46.462 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:46.462 Initialization complete. Launching workers. 
00:20:46.462 ======================================================== 00:20:46.462 Latency(us) 00:20:46.462 Device Information : IOPS MiB/s Average min max 00:20:46.462 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15053.63 58.80 4251.97 1007.71 5135.05 00:20:46.462 ======================================================== 00:20:46.462 Total : 15053.63 58.80 4251.97 1007.71 5135.05 00:20:46.462 00:20:46.462 17:57:50 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:46.462 17:57:50 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:46.462 17:57:50 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:46.462 17:57:50 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:46.462 17:57:50 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:46.462 17:57:50 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:46.462 17:57:50 -- target/tls.sh@28 -- # bdevperf_pid=1711765 00:20:46.462 17:57:50 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:46.462 17:57:50 -- target/tls.sh@31 -- # waitforlisten 1711765 /var/tmp/bdevperf.sock 00:20:46.463 17:57:50 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:46.463 17:57:50 -- common/autotest_common.sh@819 -- # '[' -z 1711765 ']' 00:20:46.463 17:57:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:46.463 17:57:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:46.463 17:57:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:46.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:46.463 17:57:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:46.463 17:57:50 -- common/autotest_common.sh@10 -- # set +x 00:20:46.463 [2024-07-22 17:57:50.544685] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:20:46.463 [2024-07-22 17:57:50.544740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1711765 ] 00:20:46.463 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.463 [2024-07-22 17:57:50.599628] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.463 [2024-07-22 17:57:50.651209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.404 17:57:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:47.404 17:57:51 -- common/autotest_common.sh@852 -- # return 0 00:20:47.404 17:57:51 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:47.404 [2024-07-22 17:57:51.472271] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:47.404 TLSTESTn1 00:20:47.404 17:57:51 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:47.404 Running I/O for 10 seconds... 00:20:57.436 00:20:57.436 Latency(us) 00:20:57.436 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.436 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:57.436 Verification LBA range: start 0x0 length 0x2000 00:20:57.436 TLSTESTn1 : 10.02 4743.76 18.53 0.00 0.00 26957.34 4789.17 88322.36 00:20:57.436 =================================================================================================================== 00:20:57.436 Total : 4743.76 18.53 0.00 0.00 26957.34 4789.17 88322.36 00:20:57.436 0 00:20:57.436 17:58:01 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:57.436 17:58:01 -- target/tls.sh@45 -- # killprocess 1711765 00:20:57.436 17:58:01 -- common/autotest_common.sh@926 -- # '[' -z 1711765 ']' 00:20:57.436 17:58:01 -- common/autotest_common.sh@930 -- # kill -0 1711765 00:20:57.436 17:58:01 -- common/autotest_common.sh@931 -- # uname 00:20:57.436 17:58:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:57.436 17:58:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1711765 00:20:57.697 17:58:01 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:57.697 17:58:01 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:57.697 17:58:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1711765' 00:20:57.697 killing process with pid 1711765 00:20:57.697 17:58:01 -- common/autotest_common.sh@945 -- # kill 1711765 00:20:57.697 Received shutdown signal, test time was about 10.000000 seconds 00:20:57.697 00:20:57.697 Latency(us) 00:20:57.697 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.697 =================================================================================================================== 00:20:57.697 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:57.697 17:58:01 -- common/autotest_common.sh@950 -- # wait 1711765 00:20:57.697 17:58:01 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:57.697 17:58:01 -- common/autotest_common.sh@640 -- # local es=0 00:20:57.697 17:58:01 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:57.697 17:58:01 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:57.697 17:58:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:57.697 17:58:01 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:57.697 17:58:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:57.697 17:58:01 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:57.697 17:58:01 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:57.697 17:58:01 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:57.697 17:58:01 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:57.697 17:58:01 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:20:57.697 17:58:01 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:57.697 17:58:01 -- target/tls.sh@28 -- # bdevperf_pid=1713717 00:20:57.697 17:58:01 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:57.697 17:58:01 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:57.697 17:58:01 -- target/tls.sh@31 -- # waitforlisten 1713717 /var/tmp/bdevperf.sock 00:20:57.697 17:58:01 -- common/autotest_common.sh@819 -- # '[' -z 1713717 ']' 00:20:57.697 17:58:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:57.697 17:58:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:57.697 17:58:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:57.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:57.697 17:58:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:57.697 17:58:01 -- common/autotest_common.sh@10 -- # set +x 00:20:57.697 [2024-07-22 17:58:01.889399] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:20:57.697 [2024-07-22 17:58:01.889452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1713717 ] 00:20:57.697 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.697 [2024-07-22 17:58:01.942986] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.957 [2024-07-22 17:58:01.993470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.527 17:58:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:58.527 17:58:02 -- common/autotest_common.sh@852 -- # return 0 00:20:58.527 17:58:02 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:58.788 [2024-07-22 17:58:02.845590] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:58.788 [2024-07-22 17:58:02.850000] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:58.788 [2024-07-22 17:58:02.850592] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x164f970 (107): Transport endpoint is not connected 00:20:58.788 [2024-07-22 17:58:02.851587] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x164f970 (9): Bad file descriptor 00:20:58.788 [2024-07-22 17:58:02.852588] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:58.788 [2024-07-22 17:58:02.852595] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:58.788 [2024-07-22 17:58:02.852600] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:58.788 request: 00:20:58.788 { 00:20:58.788 "name": "TLSTEST", 00:20:58.788 "trtype": "tcp", 00:20:58.788 "traddr": "10.0.0.2", 00:20:58.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:58.788 "adrfam": "ipv4", 00:20:58.788 "trsvcid": "4420", 00:20:58.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.788 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:20:58.788 "method": "bdev_nvme_attach_controller", 00:20:58.788 "req_id": 1 00:20:58.788 } 00:20:58.788 Got JSON-RPC error response 00:20:58.788 response: 00:20:58.788 { 00:20:58.788 "code": -32602, 00:20:58.788 "message": "Invalid parameters" 00:20:58.788 } 00:20:58.788 17:58:02 -- target/tls.sh@36 -- # killprocess 1713717 00:20:58.788 17:58:02 -- common/autotest_common.sh@926 -- # '[' -z 1713717 ']' 00:20:58.788 17:58:02 -- common/autotest_common.sh@930 -- # kill -0 1713717 00:20:58.788 17:58:02 -- common/autotest_common.sh@931 -- # uname 00:20:58.788 17:58:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:58.788 17:58:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1713717 00:20:58.788 17:58:02 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:58.788 17:58:02 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:58.788 17:58:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1713717' 00:20:58.788 killing process with pid 1713717 00:20:58.788 17:58:02 -- common/autotest_common.sh@945 -- # kill 1713717 00:20:58.788 Received shutdown signal, test time was about 10.000000 seconds 00:20:58.788 00:20:58.788 Latency(us) 00:20:58.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.788 =================================================================================================================== 00:20:58.788 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:58.788 17:58:02 -- common/autotest_common.sh@950 -- # wait 1713717 00:20:58.788 17:58:03 -- target/tls.sh@37 -- # return 1 00:20:58.788 17:58:03 -- common/autotest_common.sh@643 -- # es=1 00:20:58.788 17:58:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:58.788 17:58:03 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:58.788 17:58:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:58.788 17:58:03 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:58.788 17:58:03 -- common/autotest_common.sh@640 -- # local es=0 00:20:58.788 17:58:03 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:58.788 17:58:03 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:58.788 17:58:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:58.788 17:58:03 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:58.788 17:58:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:58.788 17:58:03 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:58.788 17:58:03 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:58.788 17:58:03 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:58.788 17:58:03 -- target/tls.sh@23 -- 
# hostnqn=nqn.2016-06.io.spdk:host2 00:20:58.788 17:58:03 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:58.788 17:58:03 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:58.788 17:58:03 -- target/tls.sh@28 -- # bdevperf_pid=1713809 00:20:58.788 17:58:03 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:58.788 17:58:03 -- target/tls.sh@31 -- # waitforlisten 1713809 /var/tmp/bdevperf.sock 00:20:58.788 17:58:03 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:58.788 17:58:03 -- common/autotest_common.sh@819 -- # '[' -z 1713809 ']' 00:20:58.788 17:58:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:58.788 17:58:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:58.788 17:58:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:58.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:58.788 17:58:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:58.788 17:58:03 -- common/autotest_common.sh@10 -- # set +x 00:20:59.049 [2024-07-22 17:58:03.070354] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:59.049 [2024-07-22 17:58:03.070405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1713809 ] 00:20:59.049 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.049 [2024-07-22 17:58:03.125321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.049 [2024-07-22 17:58:03.176065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:59.619 17:58:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:59.619 17:58:03 -- common/autotest_common.sh@852 -- # return 0 00:20:59.619 17:58:03 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:59.880 [2024-07-22 17:58:03.996248] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:59.880 [2024-07-22 17:58:04.002259] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:59.880 [2024-07-22 17:58:04.002280] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:59.880 [2024-07-22 17:58:04.002303] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:59.880 [2024-07-22 17:58:04.002355] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f7d970 (107): Transport endpoint is not connected 00:20:59.880 [2024-07-22 17:58:04.003338] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1f7d970 (9): Bad file descriptor 00:20:59.880 [2024-07-22 17:58:04.004340] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:59.880 [2024-07-22 17:58:04.004346] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:59.880 [2024-07-22 17:58:04.004356] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:59.880 request: 00:20:59.880 { 00:20:59.880 "name": "TLSTEST", 00:20:59.880 "trtype": "tcp", 00:20:59.880 "traddr": "10.0.0.2", 00:20:59.880 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:59.880 "adrfam": "ipv4", 00:20:59.880 "trsvcid": "4420", 00:20:59.880 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.880 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:20:59.880 "method": "bdev_nvme_attach_controller", 00:20:59.880 "req_id": 1 00:20:59.880 } 00:20:59.880 Got JSON-RPC error response 00:20:59.880 response: 00:20:59.880 { 00:20:59.880 "code": -32602, 00:20:59.880 "message": "Invalid parameters" 00:20:59.880 } 00:20:59.880 17:58:04 -- target/tls.sh@36 -- # killprocess 1713809 00:20:59.880 17:58:04 -- common/autotest_common.sh@926 -- # '[' -z 1713809 ']' 00:20:59.880 17:58:04 -- common/autotest_common.sh@930 -- # kill -0 1713809 00:20:59.880 17:58:04 -- common/autotest_common.sh@931 -- # uname 00:20:59.880 17:58:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:59.880 17:58:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1713809 00:20:59.880 17:58:04 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:59.880 17:58:04 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:59.880 17:58:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1713809' 00:20:59.880 killing process with pid 1713809 00:20:59.880 17:58:04 -- common/autotest_common.sh@945 -- # kill 1713809 00:20:59.880 Received shutdown signal, test time was about 10.000000 seconds 00:20:59.880 00:20:59.880 Latency(us) 00:20:59.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.880 =================================================================================================================== 00:20:59.880 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:59.880 17:58:04 -- common/autotest_common.sh@950 -- # wait 1713809 00:21:00.141 17:58:04 -- target/tls.sh@37 -- # return 1 00:21:00.141 17:58:04 -- common/autotest_common.sh@643 -- # es=1 00:21:00.141 17:58:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:00.141 17:58:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:00.141 17:58:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:00.141 17:58:04 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:00.141 17:58:04 -- common/autotest_common.sh@640 -- # local es=0 00:21:00.141 17:58:04 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:00.141 17:58:04 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:21:00.141 17:58:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:00.141 17:58:04 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:21:00.141 17:58:04 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:00.141 17:58:04 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:00.141 17:58:04 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:00.141 17:58:04 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:00.141 17:58:04 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:00.141 17:58:04 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:21:00.141 17:58:04 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:00.141 17:58:04 -- target/tls.sh@28 -- # bdevperf_pid=1714116 00:21:00.141 17:58:04 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:00.141 17:58:04 -- target/tls.sh@31 -- # waitforlisten 1714116 /var/tmp/bdevperf.sock 00:21:00.141 17:58:04 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:00.141 17:58:04 -- common/autotest_common.sh@819 -- # '[' -z 1714116 ']' 00:21:00.141 17:58:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:00.141 17:58:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:00.141 17:58:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:00.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:00.141 17:58:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:00.141 17:58:04 -- common/autotest_common.sh@10 -- # set +x 00:21:00.141 [2024-07-22 17:58:04.229151] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:21:00.141 [2024-07-22 17:58:04.229204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1714116 ] 00:21:00.141 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.141 [2024-07-22 17:58:04.284391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.141 [2024-07-22 17:58:04.335432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.083 17:58:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:01.083 17:58:05 -- common/autotest_common.sh@852 -- # return 0 00:21:01.083 17:58:05 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:21:01.083 [2024-07-22 17:58:05.227586] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:01.083 [2024-07-22 17:58:05.235306] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:01.083 [2024-07-22 17:58:05.235329] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:01.083 [2024-07-22 17:58:05.235359] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:01.083 [2024-07-22 17:58:05.235738] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x67b970 (107): Transport endpoint is not connected 00:21:01.083 [2024-07-22 17:58:05.236732] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x67b970 (9): Bad file descriptor 00:21:01.083 [2024-07-22 17:58:05.237734] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:01.083 [2024-07-22 17:58:05.237740] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:01.083 [2024-07-22 17:58:05.237746] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:21:01.083 request: 00:21:01.083 { 00:21:01.083 "name": "TLSTEST", 00:21:01.083 "trtype": "tcp", 00:21:01.083 "traddr": "10.0.0.2", 00:21:01.083 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:01.083 "adrfam": "ipv4", 00:21:01.083 "trsvcid": "4420", 00:21:01.083 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:01.083 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:21:01.083 "method": "bdev_nvme_attach_controller", 00:21:01.083 "req_id": 1 00:21:01.083 } 00:21:01.083 Got JSON-RPC error response 00:21:01.083 response: 00:21:01.083 { 00:21:01.083 "code": -32602, 00:21:01.083 "message": "Invalid parameters" 00:21:01.083 } 00:21:01.083 17:58:05 -- target/tls.sh@36 -- # killprocess 1714116 00:21:01.083 17:58:05 -- common/autotest_common.sh@926 -- # '[' -z 1714116 ']' 00:21:01.083 17:58:05 -- common/autotest_common.sh@930 -- # kill -0 1714116 00:21:01.083 17:58:05 -- common/autotest_common.sh@931 -- # uname 00:21:01.083 17:58:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:01.083 17:58:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1714116 00:21:01.083 17:58:05 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:01.083 17:58:05 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:01.083 17:58:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1714116' 00:21:01.083 killing process with pid 1714116 00:21:01.083 17:58:05 -- common/autotest_common.sh@945 -- # kill 1714116 00:21:01.083 Received shutdown signal, test time was about 10.000000 seconds 00:21:01.083 00:21:01.083 Latency(us) 00:21:01.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.083 =================================================================================================================== 00:21:01.083 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:01.083 17:58:05 -- common/autotest_common.sh@950 -- # wait 1714116 00:21:01.344 17:58:05 -- target/tls.sh@37 -- # return 1 00:21:01.344 17:58:05 -- common/autotest_common.sh@643 -- # es=1 00:21:01.344 17:58:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:01.344 17:58:05 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:01.344 17:58:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:01.344 17:58:05 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:01.344 17:58:05 -- common/autotest_common.sh@640 -- # local es=0 00:21:01.344 17:58:05 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:01.344 17:58:05 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:21:01.344 17:58:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:01.344 17:58:05 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:21:01.344 17:58:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:01.344 17:58:05 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:01.344 17:58:05 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:01.344 17:58:05 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:01.344 17:58:05 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:01.344 17:58:05 -- target/tls.sh@23 -- # psk= 00:21:01.344 17:58:05 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:01.344 17:58:05 -- target/tls.sh@28 
-- # bdevperf_pid=1714295 00:21:01.344 17:58:05 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:01.344 17:58:05 -- target/tls.sh@31 -- # waitforlisten 1714295 /var/tmp/bdevperf.sock 00:21:01.344 17:58:05 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:01.344 17:58:05 -- common/autotest_common.sh@819 -- # '[' -z 1714295 ']' 00:21:01.344 17:58:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:01.344 17:58:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:01.344 17:58:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:01.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:01.345 17:58:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:01.345 17:58:05 -- common/autotest_common.sh@10 -- # set +x 00:21:01.345 [2024-07-22 17:58:05.458423] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:01.345 [2024-07-22 17:58:05.458477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1714295 ] 00:21:01.345 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.345 [2024-07-22 17:58:05.513784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.345 [2024-07-22 17:58:05.564695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.286 17:58:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:02.286 17:58:06 -- common/autotest_common.sh@852 -- # return 0 00:21:02.286 17:58:06 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:02.286 [2024-07-22 17:58:06.398187] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:02.286 [2024-07-22 17:58:06.399974] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11682a0 (9): Bad file descriptor 00:21:02.286 [2024-07-22 17:58:06.400972] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:02.286 [2024-07-22 17:58:06.400979] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:02.286 [2024-07-22 17:58:06.400984] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:02.286 request: 00:21:02.286 { 00:21:02.286 "name": "TLSTEST", 00:21:02.286 "trtype": "tcp", 00:21:02.286 "traddr": "10.0.0.2", 00:21:02.286 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:02.286 "adrfam": "ipv4", 00:21:02.286 "trsvcid": "4420", 00:21:02.286 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.286 "method": "bdev_nvme_attach_controller", 00:21:02.286 "req_id": 1 00:21:02.286 } 00:21:02.286 Got JSON-RPC error response 00:21:02.286 response: 00:21:02.286 { 00:21:02.286 "code": -32602, 00:21:02.286 "message": "Invalid parameters" 00:21:02.286 } 00:21:02.286 17:58:06 -- target/tls.sh@36 -- # killprocess 1714295 00:21:02.286 17:58:06 -- common/autotest_common.sh@926 -- # '[' -z 1714295 ']' 00:21:02.286 17:58:06 -- common/autotest_common.sh@930 -- # kill -0 1714295 00:21:02.286 17:58:06 -- common/autotest_common.sh@931 -- # uname 00:21:02.286 17:58:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:02.286 17:58:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1714295 00:21:02.286 17:58:06 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:02.286 17:58:06 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:02.286 17:58:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1714295' 00:21:02.286 killing process with pid 1714295 00:21:02.286 17:58:06 -- common/autotest_common.sh@945 -- # kill 1714295 00:21:02.286 Received shutdown signal, test time was about 10.000000 seconds 00:21:02.286 00:21:02.286 Latency(us) 00:21:02.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.286 =================================================================================================================== 00:21:02.286 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:02.286 17:58:06 -- common/autotest_common.sh@950 -- # wait 1714295 00:21:02.547 17:58:06 -- target/tls.sh@37 -- # return 1 00:21:02.547 17:58:06 -- common/autotest_common.sh@643 -- # es=1 00:21:02.547 17:58:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:02.547 17:58:06 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:02.547 17:58:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:02.547 17:58:06 -- target/tls.sh@167 -- # killprocess 1709397 00:21:02.547 17:58:06 -- common/autotest_common.sh@926 -- # '[' -z 1709397 ']' 00:21:02.547 17:58:06 -- common/autotest_common.sh@930 -- # kill -0 1709397 00:21:02.547 17:58:06 -- common/autotest_common.sh@931 -- # uname 00:21:02.547 17:58:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:02.547 17:58:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1709397 00:21:02.547 17:58:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:02.547 17:58:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:02.547 17:58:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1709397' 00:21:02.547 killing process with pid 1709397 00:21:02.547 17:58:06 -- common/autotest_common.sh@945 -- # kill 1709397 00:21:02.547 17:58:06 -- common/autotest_common.sh@950 -- # wait 1709397 00:21:02.547 17:58:06 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:21:02.547 17:58:06 -- target/tls.sh@49 -- # local key hash crc 00:21:02.547 17:58:06 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:02.547 17:58:06 -- target/tls.sh@51 -- # hash=02 00:21:02.547 17:58:06 -- target/tls.sh@52 -- # echo 
-n 00112233445566778899aabbccddeeff0011223344556677 00:21:02.547 17:58:06 -- target/tls.sh@52 -- # gzip -1 -c 00:21:02.547 17:58:06 -- target/tls.sh@52 -- # tail -c8 00:21:02.547 17:58:06 -- target/tls.sh@52 -- # head -c 4 00:21:02.547 17:58:06 -- target/tls.sh@52 -- # crc='�e�'\''' 00:21:02.547 17:58:06 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:21:02.547 17:58:06 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:21:02.547 17:58:06 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:02.547 17:58:06 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:02.547 17:58:06 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:02.547 17:58:06 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:02.547 17:58:06 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:02.547 17:58:06 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:21:02.547 17:58:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:02.547 17:58:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:02.547 17:58:06 -- common/autotest_common.sh@10 -- # set +x 00:21:02.547 17:58:06 -- nvmf/common.sh@469 -- # nvmfpid=1714484 00:21:02.547 17:58:06 -- nvmf/common.sh@470 -- # waitforlisten 1714484 00:21:02.547 17:58:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:02.547 17:58:06 -- common/autotest_common.sh@819 -- # '[' -z 1714484 ']' 00:21:02.547 17:58:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.547 17:58:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:02.547 17:58:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.547 17:58:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:02.547 17:58:06 -- common/autotest_common.sh@10 -- # set +x 00:21:02.808 [2024-07-22 17:58:06.837249] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:02.808 [2024-07-22 17:58:06.837306] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.808 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.808 [2024-07-22 17:58:06.905295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.808 [2024-07-22 17:58:06.968068] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:02.808 [2024-07-22 17:58:06.968184] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.808 [2024-07-22 17:58:06.968192] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.808 [2024-07-22 17:58:06.968199] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
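The xtrace above shows how format_interchange_psk in target/tls.sh builds the key_long value: the raw key bytes are run through gzip only to harvest the CRC32 from its trailer, the 4-byte CRC is appended to the key, and the result is base64-encoded under the NVMeTLSkey-1:02: prefix. A minimal stand-alone sketch of that same derivation, assuming a POSIX shell with gzip and base64 on PATH (the key below is the test's sample value, not a real secret):

key=00112233445566778899aabbccddeeff0011223344556677
# gzip's 8-byte trailer is CRC32 then ISIZE, both little-endian; keep the first 4 bytes.
psk=$({ echo -n "$key"; echo -n "$key" | gzip -1 -c | tail -c8 | head -c4; } | base64)
echo "NVMeTLSkey-1:02:${psk}:"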
00:21:02.808 [2024-07-22 17:58:06.968216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.749 17:58:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:03.749 17:58:07 -- common/autotest_common.sh@852 -- # return 0 00:21:03.749 17:58:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:03.749 17:58:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:03.749 17:58:07 -- common/autotest_common.sh@10 -- # set +x 00:21:03.749 17:58:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.749 17:58:07 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:03.749 17:58:07 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:03.749 17:58:07 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:03.749 [2024-07-22 17:58:07.873178] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.749 17:58:07 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:04.009 17:58:08 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:04.009 [2024-07-22 17:58:08.230093] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:04.009 [2024-07-22 17:58:08.230275] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:04.009 17:58:08 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:04.268 malloc0 00:21:04.268 17:58:08 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:04.529 17:58:08 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:04.529 17:58:08 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:04.529 17:58:08 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:04.529 17:58:08 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:04.529 17:58:08 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:04.529 17:58:08 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:21:04.529 17:58:08 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:04.529 17:58:08 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:04.529 17:58:08 -- target/tls.sh@28 -- # bdevperf_pid=1714825 00:21:04.529 17:58:08 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:04.529 17:58:08 -- target/tls.sh@31 -- # waitforlisten 1714825 /var/tmp/bdevperf.sock 00:21:04.529 17:58:08 -- common/autotest_common.sh@819 -- # '[' -z 1714825 
']' 00:21:04.529 17:58:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:04.529 17:58:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:04.529 17:58:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:04.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:04.529 17:58:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:04.529 17:58:08 -- common/autotest_common.sh@10 -- # set +x 00:21:04.529 [2024-07-22 17:58:08.794809] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:04.529 [2024-07-22 17:58:08.794859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1714825 ] 00:21:04.791 EAL: No free 2048 kB hugepages reported on node 1 00:21:04.791 [2024-07-22 17:58:08.849798] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.791 [2024-07-22 17:58:08.900640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.731 17:58:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:05.731 17:58:09 -- common/autotest_common.sh@852 -- # return 0 00:21:05.731 17:58:09 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:05.731 [2024-07-22 17:58:09.809377] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:05.731 TLSTESTn1 00:21:05.731 17:58:09 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:05.731 Running I/O for 10 seconds... 
00:21:17.962 00:21:17.962 Latency(us) 00:21:17.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.962 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:17.962 Verification LBA range: start 0x0 length 0x2000 00:21:17.962 TLSTESTn1 : 10.03 5054.80 19.75 0.00 0.00 25276.47 4839.58 62511.26 00:21:17.962 =================================================================================================================== 00:21:17.962 Total : 5054.80 19.75 0.00 0.00 25276.47 4839.58 62511.26 00:21:17.962 0 00:21:17.962 17:58:20 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:17.962 17:58:20 -- target/tls.sh@45 -- # killprocess 1714825 00:21:17.962 17:58:20 -- common/autotest_common.sh@926 -- # '[' -z 1714825 ']' 00:21:17.962 17:58:20 -- common/autotest_common.sh@930 -- # kill -0 1714825 00:21:17.962 17:58:20 -- common/autotest_common.sh@931 -- # uname 00:21:17.962 17:58:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:17.962 17:58:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1714825 00:21:17.962 17:58:20 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:17.962 17:58:20 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:17.962 17:58:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1714825' 00:21:17.962 killing process with pid 1714825 00:21:17.962 17:58:20 -- common/autotest_common.sh@945 -- # kill 1714825 00:21:17.962 Received shutdown signal, test time was about 10.000000 seconds 00:21:17.962 00:21:17.962 Latency(us) 00:21:17.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.962 =================================================================================================================== 00:21:17.962 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:17.962 17:58:20 -- common/autotest_common.sh@950 -- # wait 1714825 00:21:17.962 17:58:20 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:17.962 17:58:20 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:17.962 17:58:20 -- common/autotest_common.sh@640 -- # local es=0 00:21:17.962 17:58:20 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:17.962 17:58:20 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:21:17.962 17:58:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:17.962 17:58:20 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:21:17.962 17:58:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:17.962 17:58:20 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:17.962 17:58:20 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:17.962 17:58:20 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:17.962 17:58:20 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:17.962 17:58:20 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:21:17.962 17:58:20 -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:17.962 17:58:20 -- target/tls.sh@28 -- # bdevperf_pid=1716819 00:21:17.962 17:58:20 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:17.962 17:58:20 -- target/tls.sh@31 -- # waitforlisten 1716819 /var/tmp/bdevperf.sock 00:21:17.962 17:58:20 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:17.962 17:58:20 -- common/autotest_common.sh@819 -- # '[' -z 1716819 ']' 00:21:17.962 17:58:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:17.963 17:58:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:17.963 17:58:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:17.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:17.963 17:58:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:17.963 17:58:20 -- common/autotest_common.sh@10 -- # set +x 00:21:17.963 [2024-07-22 17:58:20.262047] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:17.963 [2024-07-22 17:58:20.262098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1716819 ] 00:21:17.963 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.963 [2024-07-22 17:58:20.316877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.963 [2024-07-22 17:58:20.366947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:17.963 17:58:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:17.963 17:58:21 -- common/autotest_common.sh@852 -- # return 0 00:21:17.963 17:58:21 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:17.963 [2024-07-22 17:58:21.252076] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:17.963 [2024-07-22 17:58:21.252113] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:17.963 request: 00:21:17.963 { 00:21:17.963 "name": "TLSTEST", 00:21:17.963 "trtype": "tcp", 00:21:17.963 "traddr": "10.0.0.2", 00:21:17.963 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:17.963 "adrfam": "ipv4", 00:21:17.963 "trsvcid": "4420", 00:21:17.963 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.963 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:21:17.963 "method": "bdev_nvme_attach_controller", 00:21:17.963 "req_id": 1 00:21:17.963 } 00:21:17.963 Got JSON-RPC error response 00:21:17.963 response: 00:21:17.963 { 00:21:17.963 "code": -22, 00:21:17.963 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:21:17.963 } 00:21:17.963 17:58:21 -- target/tls.sh@36 -- # killprocess 1716819 00:21:17.963 17:58:21 -- common/autotest_common.sh@926 -- # '[' -z 1716819 ']' 00:21:17.963 17:58:21 -- 
common/autotest_common.sh@930 -- # kill -0 1716819 00:21:17.963 17:58:21 -- common/autotest_common.sh@931 -- # uname 00:21:17.963 17:58:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:17.963 17:58:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1716819 00:21:17.963 17:58:21 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:17.963 17:58:21 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:17.963 17:58:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1716819' 00:21:17.963 killing process with pid 1716819 00:21:17.963 17:58:21 -- common/autotest_common.sh@945 -- # kill 1716819 00:21:17.963 Received shutdown signal, test time was about 10.000000 seconds 00:21:17.963 00:21:17.963 Latency(us) 00:21:17.963 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.963 =================================================================================================================== 00:21:17.963 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:17.963 17:58:21 -- common/autotest_common.sh@950 -- # wait 1716819 00:21:17.963 17:58:21 -- target/tls.sh@37 -- # return 1 00:21:17.963 17:58:21 -- common/autotest_common.sh@643 -- # es=1 00:21:17.963 17:58:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:17.963 17:58:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:17.963 17:58:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:17.963 17:58:21 -- target/tls.sh@183 -- # killprocess 1714484 00:21:17.963 17:58:21 -- common/autotest_common.sh@926 -- # '[' -z 1714484 ']' 00:21:17.963 17:58:21 -- common/autotest_common.sh@930 -- # kill -0 1714484 00:21:17.963 17:58:21 -- common/autotest_common.sh@931 -- # uname 00:21:17.963 17:58:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:17.963 17:58:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1714484 00:21:17.963 17:58:21 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:17.963 17:58:21 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:17.963 17:58:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1714484' 00:21:17.963 killing process with pid 1714484 00:21:17.963 17:58:21 -- common/autotest_common.sh@945 -- # kill 1714484 00:21:17.963 17:58:21 -- common/autotest_common.sh@950 -- # wait 1714484 00:21:17.963 17:58:21 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:17.963 17:58:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:17.963 17:58:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:17.963 17:58:21 -- common/autotest_common.sh@10 -- # set +x 00:21:17.963 17:58:21 -- nvmf/common.sh@469 -- # nvmfpid=1716998 00:21:17.963 17:58:21 -- nvmf/common.sh@470 -- # waitforlisten 1716998 00:21:17.963 17:58:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:17.963 17:58:21 -- common/autotest_common.sh@819 -- # '[' -z 1716998 ']' 00:21:17.963 17:58:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.963 17:58:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:17.963 17:58:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
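The -22 "Could not retrieve PSK from file" failure above is driven purely by the key file's mode: after the earlier chmod 0666, tcp_load_psk rejects a PSK file that is readable by group or others. A hedged sketch of the check this step exercises, reusing the attach command and paths that appear verbatim in the trace:

key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
chmod 0666 "$key"
# expected to fail with -22 "Could not retrieve PSK from file": mode is too permissive
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
     -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key"
chmod 0600 "$key"   # restore the restrictive mode the rest of the run relies on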
00:21:17.963 17:58:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:17.963 17:58:21 -- common/autotest_common.sh@10 -- # set +x 00:21:17.963 [2024-07-22 17:58:21.656013] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:17.963 [2024-07-22 17:58:21.656066] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.963 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.963 [2024-07-22 17:58:21.723698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.963 [2024-07-22 17:58:21.782001] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:17.963 [2024-07-22 17:58:21.782120] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.963 [2024-07-22 17:58:21.782128] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:17.963 [2024-07-22 17:58:21.782134] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:17.963 [2024-07-22 17:58:21.782152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.224 17:58:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:18.224 17:58:22 -- common/autotest_common.sh@852 -- # return 0 00:21:18.224 17:58:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:18.224 17:58:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:18.224 17:58:22 -- common/autotest_common.sh@10 -- # set +x 00:21:18.224 17:58:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.224 17:58:22 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:18.224 17:58:22 -- common/autotest_common.sh@640 -- # local es=0 00:21:18.224 17:58:22 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:18.224 17:58:22 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:21:18.224 17:58:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:18.224 17:58:22 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:21:18.224 17:58:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:18.224 17:58:22 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:18.224 17:58:22 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:18.224 17:58:22 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:18.484 [2024-07-22 17:58:22.647053] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.484 17:58:22 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:18.744 17:58:22 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:18.744 [2024-07-22 17:58:22.999950] tcp.c: 
912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:18.744 [2024-07-22 17:58:23.000135] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.744 17:58:23 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:19.004 malloc0 00:21:19.004 17:58:23 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:19.263 17:58:23 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:19.524 [2024-07-22 17:58:23.539805] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:19.524 [2024-07-22 17:58:23.539830] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:19.524 [2024-07-22 17:58:23.539847] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:21:19.524 request: 00:21:19.524 { 00:21:19.524 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.524 "host": "nqn.2016-06.io.spdk:host1", 00:21:19.524 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:21:19.524 "method": "nvmf_subsystem_add_host", 00:21:19.524 "req_id": 1 00:21:19.524 } 00:21:19.524 Got JSON-RPC error response 00:21:19.524 response: 00:21:19.524 { 00:21:19.524 "code": -32603, 00:21:19.524 "message": "Internal error" 00:21:19.524 } 00:21:19.524 17:58:23 -- common/autotest_common.sh@643 -- # es=1 00:21:19.524 17:58:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:19.524 17:58:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:19.524 17:58:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:19.524 17:58:23 -- target/tls.sh@189 -- # killprocess 1716998 00:21:19.524 17:58:23 -- common/autotest_common.sh@926 -- # '[' -z 1716998 ']' 00:21:19.524 17:58:23 -- common/autotest_common.sh@930 -- # kill -0 1716998 00:21:19.524 17:58:23 -- common/autotest_common.sh@931 -- # uname 00:21:19.524 17:58:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:19.524 17:58:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1716998 00:21:19.524 17:58:23 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:19.524 17:58:23 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:19.524 17:58:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1716998' 00:21:19.524 killing process with pid 1716998 00:21:19.524 17:58:23 -- common/autotest_common.sh@945 -- # kill 1716998 00:21:19.524 17:58:23 -- common/autotest_common.sh@950 -- # wait 1716998 00:21:19.524 17:58:23 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:19.524 17:58:23 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:21:19.524 17:58:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:19.524 17:58:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:19.524 17:58:23 -- common/autotest_common.sh@10 -- # set +x 00:21:19.524 17:58:23 -- nvmf/common.sh@469 -- # nvmfpid=1717413 00:21:19.524 17:58:23 -- nvmf/common.sh@470 -- # waitforlisten 1717413 00:21:19.524 17:58:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:19.524 17:58:23 -- common/autotest_common.sh@819 -- # '[' -z 1717413 ']' 00:21:19.524 17:58:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.524 17:58:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:19.524 17:58:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.524 17:58:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:19.524 17:58:23 -- common/autotest_common.sh@10 -- # set +x 00:21:19.524 [2024-07-22 17:58:23.796714] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:19.524 [2024-07-22 17:58:23.796766] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.785 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.785 [2024-07-22 17:58:23.863892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.785 [2024-07-22 17:58:23.925020] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:19.785 [2024-07-22 17:58:23.925133] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.785 [2024-07-22 17:58:23.925140] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.785 [2024-07-22 17:58:23.925146] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:19.785 [2024-07-22 17:58:23.925164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.355 17:58:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:20.355 17:58:24 -- common/autotest_common.sh@852 -- # return 0 00:21:20.355 17:58:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:20.355 17:58:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:20.355 17:58:24 -- common/autotest_common.sh@10 -- # set +x 00:21:20.617 17:58:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:20.617 17:58:24 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:20.617 17:58:24 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:20.617 17:58:24 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:20.617 [2024-07-22 17:58:24.834286] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:20.617 17:58:24 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:20.878 17:58:25 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:21.138 [2024-07-22 17:58:25.191200] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:21.138 [2024-07-22 17:58:25.191384] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:21.138 17:58:25 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:21.138 malloc0 00:21:21.138 17:58:25 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:21.398 17:58:25 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:21.659 17:58:25 -- target/tls.sh@197 -- # bdevperf_pid=1717831 00:21:21.659 17:58:25 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:21.659 17:58:25 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:21.659 17:58:25 -- target/tls.sh@200 -- # waitforlisten 1717831 /var/tmp/bdevperf.sock 00:21:21.659 17:58:25 -- common/autotest_common.sh@819 -- # '[' -z 1717831 ']' 00:21:21.659 17:58:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:21.659 17:58:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:21.659 17:58:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:21.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
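Taken together, the RPC calls traced above bring up the TLS-enabled target and point bdevperf at it. A condensed sketch of that sequence, using only commands that appear in this run (the -k flag on the listener is what enables TLS on the TCP transport; the NQNs, serial number, and paths are the test's own values):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt

# target side (nvmf_tgt already started via ip netns exec cvl_0_0_ns_spdk)
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"

# initiator side (bdevperf started with -z -r /var/tmp/bdevperf.sock)
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
     -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key"
# kick off the verify workload against the attached TLSTEST bdev
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests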
00:21:21.659 17:58:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:21.659 17:58:25 -- common/autotest_common.sh@10 -- # set +x 00:21:21.659 [2024-07-22 17:58:25.784987] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:21.659 [2024-07-22 17:58:25.785039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1717831 ] 00:21:21.659 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.659 [2024-07-22 17:58:25.839853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.659 [2024-07-22 17:58:25.891182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:22.600 17:58:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:22.600 17:58:26 -- common/autotest_common.sh@852 -- # return 0 00:21:22.600 17:58:26 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:22.600 [2024-07-22 17:58:26.788542] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:22.600 TLSTESTn1 00:21:22.600 17:58:26 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:22.862 17:58:27 -- target/tls.sh@205 -- # tgtconf='{ 00:21:22.862 "subsystems": [ 00:21:22.862 { 00:21:22.862 "subsystem": "iobuf", 00:21:22.862 "config": [ 00:21:22.862 { 00:21:22.862 "method": "iobuf_set_options", 00:21:22.862 "params": { 00:21:22.862 "small_pool_count": 8192, 00:21:22.862 "large_pool_count": 1024, 00:21:22.862 "small_bufsize": 8192, 00:21:22.862 "large_bufsize": 135168 00:21:22.862 } 00:21:22.862 } 00:21:22.862 ] 00:21:22.862 }, 00:21:22.862 { 00:21:22.862 "subsystem": "sock", 00:21:22.862 "config": [ 00:21:22.862 { 00:21:22.862 "method": "sock_impl_set_options", 00:21:22.862 "params": { 00:21:22.862 "impl_name": "posix", 00:21:22.862 "recv_buf_size": 2097152, 00:21:22.862 "send_buf_size": 2097152, 00:21:22.862 "enable_recv_pipe": true, 00:21:22.862 "enable_quickack": false, 00:21:22.862 "enable_placement_id": 0, 00:21:22.862 "enable_zerocopy_send_server": true, 00:21:22.862 "enable_zerocopy_send_client": false, 00:21:22.862 "zerocopy_threshold": 0, 00:21:22.862 "tls_version": 0, 00:21:22.862 "enable_ktls": false 00:21:22.862 } 00:21:22.862 }, 00:21:22.862 { 00:21:22.862 "method": "sock_impl_set_options", 00:21:22.862 "params": { 00:21:22.862 "impl_name": "ssl", 00:21:22.862 "recv_buf_size": 4096, 00:21:22.862 "send_buf_size": 4096, 00:21:22.862 "enable_recv_pipe": true, 00:21:22.862 "enable_quickack": false, 00:21:22.862 "enable_placement_id": 0, 00:21:22.862 "enable_zerocopy_send_server": true, 00:21:22.862 "enable_zerocopy_send_client": false, 00:21:22.862 "zerocopy_threshold": 0, 00:21:22.862 "tls_version": 0, 00:21:22.862 "enable_ktls": false 00:21:22.862 } 00:21:22.862 } 00:21:22.862 ] 00:21:22.862 }, 00:21:22.862 { 00:21:22.862 "subsystem": "vmd", 00:21:22.862 "config": [] 00:21:22.862 }, 00:21:22.862 { 00:21:22.862 "subsystem": "accel", 00:21:22.862 "config": [ 00:21:22.862 { 00:21:22.862 "method": "accel_set_options", 00:21:22.862 "params": { 00:21:22.862 "small_cache_size": 128, 
00:21:22.862 "large_cache_size": 16, 00:21:22.862 "task_count": 2048, 00:21:22.862 "sequence_count": 2048, 00:21:22.862 "buf_count": 2048 00:21:22.862 } 00:21:22.862 } 00:21:22.862 ] 00:21:22.862 }, 00:21:22.862 { 00:21:22.862 "subsystem": "bdev", 00:21:22.862 "config": [ 00:21:22.862 { 00:21:22.862 "method": "bdev_set_options", 00:21:22.862 "params": { 00:21:22.862 "bdev_io_pool_size": 65535, 00:21:22.862 "bdev_io_cache_size": 256, 00:21:22.862 "bdev_auto_examine": true, 00:21:22.862 "iobuf_small_cache_size": 128, 00:21:22.862 "iobuf_large_cache_size": 16 00:21:22.862 } 00:21:22.862 }, 00:21:22.862 { 00:21:22.862 "method": "bdev_raid_set_options", 00:21:22.862 "params": { 00:21:22.862 "process_window_size_kb": 1024 00:21:22.862 } 00:21:22.862 }, 00:21:22.862 { 00:21:22.862 "method": "bdev_iscsi_set_options", 00:21:22.862 "params": { 00:21:22.862 "timeout_sec": 30 00:21:22.862 } 00:21:22.862 }, 00:21:22.862 { 00:21:22.862 "method": "bdev_nvme_set_options", 00:21:22.862 "params": { 00:21:22.862 "action_on_timeout": "none", 00:21:22.862 "timeout_us": 0, 00:21:22.862 "timeout_admin_us": 0, 00:21:22.862 "keep_alive_timeout_ms": 10000, 00:21:22.862 "transport_retry_count": 4, 00:21:22.862 "arbitration_burst": 0, 00:21:22.862 "low_priority_weight": 0, 00:21:22.862 "medium_priority_weight": 0, 00:21:22.862 "high_priority_weight": 0, 00:21:22.862 "nvme_adminq_poll_period_us": 10000, 00:21:22.862 "nvme_ioq_poll_period_us": 0, 00:21:22.862 "io_queue_requests": 0, 00:21:22.862 "delay_cmd_submit": true, 00:21:22.862 "bdev_retry_count": 3, 00:21:22.862 "transport_ack_timeout": 0, 00:21:22.862 "ctrlr_loss_timeout_sec": 0, 00:21:22.862 "reconnect_delay_sec": 0, 00:21:22.862 "fast_io_fail_timeout_sec": 0, 00:21:22.862 "generate_uuids": false, 00:21:22.862 "transport_tos": 0, 00:21:22.862 "io_path_stat": false, 00:21:22.862 "allow_accel_sequence": false 00:21:22.862 } 00:21:22.862 }, 00:21:22.862 { 00:21:22.862 "method": "bdev_nvme_set_hotplug", 00:21:22.862 "params": { 00:21:22.862 "period_us": 100000, 00:21:22.862 "enable": false 00:21:22.862 } 00:21:22.862 }, 00:21:22.862 { 00:21:22.862 "method": "bdev_malloc_create", 00:21:22.862 "params": { 00:21:22.862 "name": "malloc0", 00:21:22.862 "num_blocks": 8192, 00:21:22.862 "block_size": 4096, 00:21:22.862 "physical_block_size": 4096, 00:21:22.862 "uuid": "6dfc0c99-1bb9-4730-bd43-8b5c3d8cacc8", 00:21:22.862 "optimal_io_boundary": 0 00:21:22.862 } 00:21:22.862 }, 00:21:22.862 { 00:21:22.862 "method": "bdev_wait_for_examine" 00:21:22.863 } 00:21:22.863 ] 00:21:22.863 }, 00:21:22.863 { 00:21:22.863 "subsystem": "nbd", 00:21:22.863 "config": [] 00:21:22.863 }, 00:21:22.863 { 00:21:22.863 "subsystem": "scheduler", 00:21:22.863 "config": [ 00:21:22.863 { 00:21:22.863 "method": "framework_set_scheduler", 00:21:22.863 "params": { 00:21:22.863 "name": "static" 00:21:22.863 } 00:21:22.863 } 00:21:22.863 ] 00:21:22.863 }, 00:21:22.863 { 00:21:22.863 "subsystem": "nvmf", 00:21:22.863 "config": [ 00:21:22.863 { 00:21:22.863 "method": "nvmf_set_config", 00:21:22.863 "params": { 00:21:22.863 "discovery_filter": "match_any", 00:21:22.863 "admin_cmd_passthru": { 00:21:22.863 "identify_ctrlr": false 00:21:22.863 } 00:21:22.863 } 00:21:22.863 }, 00:21:22.863 { 00:21:22.863 "method": "nvmf_set_max_subsystems", 00:21:22.863 "params": { 00:21:22.863 "max_subsystems": 1024 00:21:22.863 } 00:21:22.863 }, 00:21:22.863 { 00:21:22.863 "method": "nvmf_set_crdt", 00:21:22.863 "params": { 00:21:22.863 "crdt1": 0, 00:21:22.863 "crdt2": 0, 00:21:22.863 "crdt3": 0 00:21:22.863 } 
00:21:22.863 }, 00:21:22.863 { 00:21:22.863 "method": "nvmf_create_transport", 00:21:22.863 "params": { 00:21:22.863 "trtype": "TCP", 00:21:22.863 "max_queue_depth": 128, 00:21:22.863 "max_io_qpairs_per_ctrlr": 127, 00:21:22.863 "in_capsule_data_size": 4096, 00:21:22.863 "max_io_size": 131072, 00:21:22.863 "io_unit_size": 131072, 00:21:22.863 "max_aq_depth": 128, 00:21:22.863 "num_shared_buffers": 511, 00:21:22.863 "buf_cache_size": 4294967295, 00:21:22.863 "dif_insert_or_strip": false, 00:21:22.863 "zcopy": false, 00:21:22.863 "c2h_success": false, 00:21:22.863 "sock_priority": 0, 00:21:22.863 "abort_timeout_sec": 1 00:21:22.863 } 00:21:22.863 }, 00:21:22.863 { 00:21:22.863 "method": "nvmf_create_subsystem", 00:21:22.863 "params": { 00:21:22.863 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.863 "allow_any_host": false, 00:21:22.863 "serial_number": "SPDK00000000000001", 00:21:22.863 "model_number": "SPDK bdev Controller", 00:21:22.863 "max_namespaces": 10, 00:21:22.863 "min_cntlid": 1, 00:21:22.863 "max_cntlid": 65519, 00:21:22.863 "ana_reporting": false 00:21:22.863 } 00:21:22.863 }, 00:21:22.863 { 00:21:22.863 "method": "nvmf_subsystem_add_host", 00:21:22.863 "params": { 00:21:22.863 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.863 "host": "nqn.2016-06.io.spdk:host1", 00:21:22.863 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:21:22.863 } 00:21:22.863 }, 00:21:22.863 { 00:21:22.863 "method": "nvmf_subsystem_add_ns", 00:21:22.863 "params": { 00:21:22.863 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.863 "namespace": { 00:21:22.863 "nsid": 1, 00:21:22.863 "bdev_name": "malloc0", 00:21:22.863 "nguid": "6DFC0C991BB94730BD438B5C3D8CACC8", 00:21:22.863 "uuid": "6dfc0c99-1bb9-4730-bd43-8b5c3d8cacc8" 00:21:22.863 } 00:21:22.863 } 00:21:22.863 }, 00:21:22.863 { 00:21:22.863 "method": "nvmf_subsystem_add_listener", 00:21:22.863 "params": { 00:21:22.863 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.863 "listen_address": { 00:21:22.863 "trtype": "TCP", 00:21:22.863 "adrfam": "IPv4", 00:21:22.863 "traddr": "10.0.0.2", 00:21:22.863 "trsvcid": "4420" 00:21:22.863 }, 00:21:22.863 "secure_channel": true 00:21:22.863 } 00:21:22.863 } 00:21:22.863 ] 00:21:22.863 } 00:21:22.863 ] 00:21:22.863 }' 00:21:22.863 17:58:27 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:23.124 17:58:27 -- target/tls.sh@206 -- # bdevperfconf='{ 00:21:23.124 "subsystems": [ 00:21:23.124 { 00:21:23.124 "subsystem": "iobuf", 00:21:23.124 "config": [ 00:21:23.124 { 00:21:23.124 "method": "iobuf_set_options", 00:21:23.124 "params": { 00:21:23.124 "small_pool_count": 8192, 00:21:23.124 "large_pool_count": 1024, 00:21:23.124 "small_bufsize": 8192, 00:21:23.124 "large_bufsize": 135168 00:21:23.124 } 00:21:23.124 } 00:21:23.125 ] 00:21:23.125 }, 00:21:23.125 { 00:21:23.125 "subsystem": "sock", 00:21:23.125 "config": [ 00:21:23.125 { 00:21:23.125 "method": "sock_impl_set_options", 00:21:23.125 "params": { 00:21:23.125 "impl_name": "posix", 00:21:23.125 "recv_buf_size": 2097152, 00:21:23.125 "send_buf_size": 2097152, 00:21:23.125 "enable_recv_pipe": true, 00:21:23.125 "enable_quickack": false, 00:21:23.125 "enable_placement_id": 0, 00:21:23.125 "enable_zerocopy_send_server": true, 00:21:23.125 "enable_zerocopy_send_client": false, 00:21:23.125 "zerocopy_threshold": 0, 00:21:23.125 "tls_version": 0, 00:21:23.125 "enable_ktls": false 00:21:23.125 } 00:21:23.125 }, 00:21:23.125 { 00:21:23.125 "method": 
"sock_impl_set_options", 00:21:23.125 "params": { 00:21:23.125 "impl_name": "ssl", 00:21:23.125 "recv_buf_size": 4096, 00:21:23.125 "send_buf_size": 4096, 00:21:23.125 "enable_recv_pipe": true, 00:21:23.125 "enable_quickack": false, 00:21:23.125 "enable_placement_id": 0, 00:21:23.125 "enable_zerocopy_send_server": true, 00:21:23.125 "enable_zerocopy_send_client": false, 00:21:23.125 "zerocopy_threshold": 0, 00:21:23.125 "tls_version": 0, 00:21:23.125 "enable_ktls": false 00:21:23.125 } 00:21:23.125 } 00:21:23.125 ] 00:21:23.125 }, 00:21:23.125 { 00:21:23.125 "subsystem": "vmd", 00:21:23.125 "config": [] 00:21:23.125 }, 00:21:23.125 { 00:21:23.125 "subsystem": "accel", 00:21:23.125 "config": [ 00:21:23.125 { 00:21:23.125 "method": "accel_set_options", 00:21:23.125 "params": { 00:21:23.125 "small_cache_size": 128, 00:21:23.125 "large_cache_size": 16, 00:21:23.125 "task_count": 2048, 00:21:23.125 "sequence_count": 2048, 00:21:23.125 "buf_count": 2048 00:21:23.125 } 00:21:23.125 } 00:21:23.125 ] 00:21:23.125 }, 00:21:23.125 { 00:21:23.125 "subsystem": "bdev", 00:21:23.125 "config": [ 00:21:23.125 { 00:21:23.125 "method": "bdev_set_options", 00:21:23.125 "params": { 00:21:23.125 "bdev_io_pool_size": 65535, 00:21:23.125 "bdev_io_cache_size": 256, 00:21:23.125 "bdev_auto_examine": true, 00:21:23.125 "iobuf_small_cache_size": 128, 00:21:23.125 "iobuf_large_cache_size": 16 00:21:23.125 } 00:21:23.125 }, 00:21:23.125 { 00:21:23.125 "method": "bdev_raid_set_options", 00:21:23.125 "params": { 00:21:23.125 "process_window_size_kb": 1024 00:21:23.125 } 00:21:23.125 }, 00:21:23.125 { 00:21:23.125 "method": "bdev_iscsi_set_options", 00:21:23.125 "params": { 00:21:23.125 "timeout_sec": 30 00:21:23.125 } 00:21:23.125 }, 00:21:23.125 { 00:21:23.125 "method": "bdev_nvme_set_options", 00:21:23.125 "params": { 00:21:23.125 "action_on_timeout": "none", 00:21:23.125 "timeout_us": 0, 00:21:23.125 "timeout_admin_us": 0, 00:21:23.125 "keep_alive_timeout_ms": 10000, 00:21:23.125 "transport_retry_count": 4, 00:21:23.125 "arbitration_burst": 0, 00:21:23.125 "low_priority_weight": 0, 00:21:23.125 "medium_priority_weight": 0, 00:21:23.125 "high_priority_weight": 0, 00:21:23.125 "nvme_adminq_poll_period_us": 10000, 00:21:23.125 "nvme_ioq_poll_period_us": 0, 00:21:23.125 "io_queue_requests": 512, 00:21:23.125 "delay_cmd_submit": true, 00:21:23.125 "bdev_retry_count": 3, 00:21:23.125 "transport_ack_timeout": 0, 00:21:23.125 "ctrlr_loss_timeout_sec": 0, 00:21:23.125 "reconnect_delay_sec": 0, 00:21:23.125 "fast_io_fail_timeout_sec": 0, 00:21:23.125 "generate_uuids": false, 00:21:23.125 "transport_tos": 0, 00:21:23.125 "io_path_stat": false, 00:21:23.125 "allow_accel_sequence": false 00:21:23.125 } 00:21:23.125 }, 00:21:23.125 { 00:21:23.125 "method": "bdev_nvme_attach_controller", 00:21:23.125 "params": { 00:21:23.125 "name": "TLSTEST", 00:21:23.125 "trtype": "TCP", 00:21:23.125 "adrfam": "IPv4", 00:21:23.125 "traddr": "10.0.0.2", 00:21:23.125 "trsvcid": "4420", 00:21:23.125 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.125 "prchk_reftag": false, 00:21:23.125 "prchk_guard": false, 00:21:23.125 "ctrlr_loss_timeout_sec": 0, 00:21:23.125 "reconnect_delay_sec": 0, 00:21:23.125 "fast_io_fail_timeout_sec": 0, 00:21:23.125 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:21:23.125 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:23.125 "hdgst": false, 00:21:23.125 "ddgst": false 00:21:23.125 } 00:21:23.125 }, 00:21:23.125 { 00:21:23.125 "method": "bdev_nvme_set_hotplug", 00:21:23.125 
"params": { 00:21:23.125 "period_us": 100000, 00:21:23.125 "enable": false 00:21:23.125 } 00:21:23.125 }, 00:21:23.125 { 00:21:23.125 "method": "bdev_wait_for_examine" 00:21:23.125 } 00:21:23.125 ] 00:21:23.125 }, 00:21:23.125 { 00:21:23.125 "subsystem": "nbd", 00:21:23.125 "config": [] 00:21:23.125 } 00:21:23.125 ] 00:21:23.125 }' 00:21:23.125 17:58:27 -- target/tls.sh@208 -- # killprocess 1717831 00:21:23.125 17:58:27 -- common/autotest_common.sh@926 -- # '[' -z 1717831 ']' 00:21:23.125 17:58:27 -- common/autotest_common.sh@930 -- # kill -0 1717831 00:21:23.125 17:58:27 -- common/autotest_common.sh@931 -- # uname 00:21:23.125 17:58:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:23.125 17:58:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1717831 00:21:23.447 17:58:27 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:23.447 17:58:27 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:23.447 17:58:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1717831' 00:21:23.447 killing process with pid 1717831 00:21:23.447 17:58:27 -- common/autotest_common.sh@945 -- # kill 1717831 00:21:23.447 Received shutdown signal, test time was about 10.000000 seconds 00:21:23.447 00:21:23.447 Latency(us) 00:21:23.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.447 =================================================================================================================== 00:21:23.447 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:23.447 17:58:27 -- common/autotest_common.sh@950 -- # wait 1717831 00:21:23.447 17:58:27 -- target/tls.sh@209 -- # killprocess 1717413 00:21:23.447 17:58:27 -- common/autotest_common.sh@926 -- # '[' -z 1717413 ']' 00:21:23.447 17:58:27 -- common/autotest_common.sh@930 -- # kill -0 1717413 00:21:23.447 17:58:27 -- common/autotest_common.sh@931 -- # uname 00:21:23.447 17:58:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:23.447 17:58:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1717413 00:21:23.447 17:58:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:23.447 17:58:27 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:23.447 17:58:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1717413' 00:21:23.447 killing process with pid 1717413 00:21:23.447 17:58:27 -- common/autotest_common.sh@945 -- # kill 1717413 00:21:23.447 17:58:27 -- common/autotest_common.sh@950 -- # wait 1717413 00:21:23.710 17:58:27 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:23.710 17:58:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:23.710 17:58:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:23.710 17:58:27 -- common/autotest_common.sh@10 -- # set +x 00:21:23.710 17:58:27 -- target/tls.sh@212 -- # echo '{ 00:21:23.710 "subsystems": [ 00:21:23.710 { 00:21:23.710 "subsystem": "iobuf", 00:21:23.710 "config": [ 00:21:23.710 { 00:21:23.710 "method": "iobuf_set_options", 00:21:23.710 "params": { 00:21:23.710 "small_pool_count": 8192, 00:21:23.710 "large_pool_count": 1024, 00:21:23.710 "small_bufsize": 8192, 00:21:23.710 "large_bufsize": 135168 00:21:23.710 } 00:21:23.710 } 00:21:23.710 ] 00:21:23.710 }, 00:21:23.710 { 00:21:23.710 "subsystem": "sock", 00:21:23.710 "config": [ 00:21:23.710 { 00:21:23.710 "method": "sock_impl_set_options", 00:21:23.710 "params": { 00:21:23.710 "impl_name": "posix", 00:21:23.710 
"recv_buf_size": 2097152, 00:21:23.710 "send_buf_size": 2097152, 00:21:23.710 "enable_recv_pipe": true, 00:21:23.710 "enable_quickack": false, 00:21:23.710 "enable_placement_id": 0, 00:21:23.710 "enable_zerocopy_send_server": true, 00:21:23.710 "enable_zerocopy_send_client": false, 00:21:23.710 "zerocopy_threshold": 0, 00:21:23.710 "tls_version": 0, 00:21:23.710 "enable_ktls": false 00:21:23.710 } 00:21:23.710 }, 00:21:23.710 { 00:21:23.710 "method": "sock_impl_set_options", 00:21:23.710 "params": { 00:21:23.710 "impl_name": "ssl", 00:21:23.710 "recv_buf_size": 4096, 00:21:23.710 "send_buf_size": 4096, 00:21:23.710 "enable_recv_pipe": true, 00:21:23.710 "enable_quickack": false, 00:21:23.710 "enable_placement_id": 0, 00:21:23.710 "enable_zerocopy_send_server": true, 00:21:23.710 "enable_zerocopy_send_client": false, 00:21:23.710 "zerocopy_threshold": 0, 00:21:23.710 "tls_version": 0, 00:21:23.710 "enable_ktls": false 00:21:23.710 } 00:21:23.710 } 00:21:23.710 ] 00:21:23.711 }, 00:21:23.711 { 00:21:23.711 "subsystem": "vmd", 00:21:23.711 "config": [] 00:21:23.711 }, 00:21:23.711 { 00:21:23.711 "subsystem": "accel", 00:21:23.711 "config": [ 00:21:23.711 { 00:21:23.711 "method": "accel_set_options", 00:21:23.711 "params": { 00:21:23.711 "small_cache_size": 128, 00:21:23.711 "large_cache_size": 16, 00:21:23.711 "task_count": 2048, 00:21:23.711 "sequence_count": 2048, 00:21:23.711 "buf_count": 2048 00:21:23.711 } 00:21:23.711 } 00:21:23.711 ] 00:21:23.711 }, 00:21:23.711 { 00:21:23.711 "subsystem": "bdev", 00:21:23.711 "config": [ 00:21:23.711 { 00:21:23.711 "method": "bdev_set_options", 00:21:23.711 "params": { 00:21:23.711 "bdev_io_pool_size": 65535, 00:21:23.711 "bdev_io_cache_size": 256, 00:21:23.711 "bdev_auto_examine": true, 00:21:23.711 "iobuf_small_cache_size": 128, 00:21:23.711 "iobuf_large_cache_size": 16 00:21:23.711 } 00:21:23.711 }, 00:21:23.711 { 00:21:23.711 "method": "bdev_raid_set_options", 00:21:23.711 "params": { 00:21:23.711 "process_window_size_kb": 1024 00:21:23.711 } 00:21:23.711 }, 00:21:23.711 { 00:21:23.711 "method": "bdev_iscsi_set_options", 00:21:23.711 "params": { 00:21:23.711 "timeout_sec": 30 00:21:23.711 } 00:21:23.711 }, 00:21:23.711 { 00:21:23.711 "method": "bdev_nvme_set_options", 00:21:23.711 "params": { 00:21:23.711 "action_on_timeout": "none", 00:21:23.711 "timeout_us": 0, 00:21:23.711 "timeout_admin_us": 0, 00:21:23.711 "keep_alive_timeout_ms": 10000, 00:21:23.711 "transport_retry_count": 4, 00:21:23.711 "arbitration_burst": 0, 00:21:23.711 "low_priority_weight": 0, 00:21:23.711 "medium_priority_weight": 0, 00:21:23.711 "high_priority_weight": 0, 00:21:23.711 "nvme_adminq_poll_period_us": 10000, 00:21:23.711 "nvme_ioq_poll_period_us": 0, 00:21:23.711 "io_queue_requests": 0, 00:21:23.711 "delay_cmd_submit": true, 00:21:23.711 "bdev_retry_count": 3, 00:21:23.711 "transport_ack_timeout": 0, 00:21:23.711 "ctrlr_loss_timeout_sec": 0, 00:21:23.711 "reconnect_delay_sec": 0, 00:21:23.711 "fast_io_fail_timeout_sec": 0, 00:21:23.711 "generate_uuids": false, 00:21:23.711 "transport_tos": 0, 00:21:23.711 "io_path_stat": false, 00:21:23.711 "allow_accel_sequence": false 00:21:23.711 } 00:21:23.711 }, 00:21:23.711 { 00:21:23.711 "method": "bdev_nvme_set_hotplug", 00:21:23.711 "params": { 00:21:23.711 "period_us": 100000, 00:21:23.711 "enable": false 00:21:23.711 } 00:21:23.711 }, 00:21:23.711 { 00:21:23.711 "method": "bdev_malloc_create", 00:21:23.711 "params": { 00:21:23.711 "name": "malloc0", 00:21:23.711 "num_blocks": 8192, 00:21:23.711 "block_size": 4096, 
00:21:23.711 "physical_block_size": 4096, 00:21:23.711 "uuid": "6dfc0c99-1bb9-4730-bd43-8b5c3d8cacc8", 00:21:23.711 "optimal_io_boundary": 0 00:21:23.711 } 00:21:23.711 }, 00:21:23.711 { 00:21:23.711 "method": "bdev_wait_for_examine" 00:21:23.711 } 00:21:23.711 ] 00:21:23.711 }, 00:21:23.711 { 00:21:23.711 "subsystem": "nbd", 00:21:23.711 "config": [] 00:21:23.711 }, 00:21:23.711 { 00:21:23.711 "subsystem": "scheduler", 00:21:23.711 "config": [ 00:21:23.711 { 00:21:23.711 "method": "framework_set_scheduler", 00:21:23.711 "params": { 00:21:23.711 "name": "static" 00:21:23.711 } 00:21:23.711 } 00:21:23.711 ] 00:21:23.711 }, 00:21:23.711 { 00:21:23.711 "subsystem": "nvmf", 00:21:23.711 "config": [ 00:21:23.711 { 00:21:23.711 "method": "nvmf_set_config", 00:21:23.711 "params": { 00:21:23.711 "discovery_filter": "match_any", 00:21:23.711 "admin_cmd_passthru": { 00:21:23.711 "identify_ctrlr": false 00:21:23.711 } 00:21:23.711 } 00:21:23.711 }, 00:21:23.711 { 00:21:23.711 "method": "nvmf_set_max_subsystems", 00:21:23.711 "params": { 00:21:23.711 "max_subsystems": 1024 00:21:23.711 } 00:21:23.711 }, 00:21:23.711 { 00:21:23.711 "method": "nvmf_set_crdt", 00:21:23.711 "params": { 00:21:23.711 "crdt1": 0, 00:21:23.711 "crdt2": 0, 00:21:23.711 "crdt3": 0 00:21:23.711 } 00:21:23.711 }, 00:21:23.711 { 00:21:23.711 "method": "nvmf_create_transport", 00:21:23.711 "params": { 00:21:23.711 "trtype": "TCP", 00:21:23.711 "max_queue_depth": 128, 00:21:23.711 "max_io_qpairs_per_ctrlr": 127, 00:21:23.711 "in_capsule_data_size": 4096, 00:21:23.711 "max_io_size": 131072, 00:21:23.711 "io_unit_size": 131072, 00:21:23.711 "max_aq_depth": 128, 00:21:23.711 "num_shared_buffers": 511, 00:21:23.711 "buf_cache_size": 4294967295, 00:21:23.711 "dif_insert_or_strip": false, 00:21:23.711 "zcopy": false, 00:21:23.711 "c2h_success": false, 00:21:23.711 "sock_priority": 0, 00:21:23.711 "abort_timeout_sec": 1 00:21:23.711 } 00:21:23.711 }, 00:21:23.711 { 00:21:23.711 "method": "nvmf_create_subsystem", 00:21:23.711 "params": { 00:21:23.711 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.711 "allow_any_host": false, 00:21:23.711 "serial_number": "SPDK00000000000001", 00:21:23.711 "model_number": "SPDK bdev Controller", 00:21:23.711 "max_namespaces": 10, 00:21:23.711 "min_cntlid": 1, 00:21:23.711 "max_cntlid": 65519, 00:21:23.711 "ana_reporting": false 00:21:23.711 } 00:21:23.711 }, 00:21:23.711 { 00:21:23.711 "method": "nvmf_subsystem_add_host", 00:21:23.711 "params": { 00:21:23.711 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.711 "host": "nqn.2016-06.io.spdk:host1", 00:21:23.711 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:21:23.711 } 00:21:23.711 }, 00:21:23.711 { 00:21:23.711 "method": "nvmf_subsystem_add_ns", 00:21:23.711 "params": { 00:21:23.711 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.711 "namespace": { 00:21:23.711 "nsid": 1, 00:21:23.711 "bdev_name": "malloc0", 00:21:23.711 "nguid": "6DFC0C991BB94730BD438B5C3D8CACC8", 00:21:23.711 "uuid": "6dfc0c99-1bb9-4730-bd43-8b5c3d8cacc8" 00:21:23.711 } 00:21:23.711 } 00:21:23.711 }, 00:21:23.711 { 00:21:23.711 "method": "nvmf_subsystem_add_listener", 00:21:23.711 "params": { 00:21:23.711 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.711 "listen_address": { 00:21:23.711 "trtype": "TCP", 00:21:23.711 "adrfam": "IPv4", 00:21:23.711 "traddr": "10.0.0.2", 00:21:23.711 "trsvcid": "4420" 00:21:23.711 }, 00:21:23.711 "secure_channel": true 00:21:23.711 } 00:21:23.711 } 00:21:23.711 ] 00:21:23.711 } 00:21:23.711 ] 00:21:23.711 }' 00:21:23.711 
17:58:27 -- nvmf/common.sh@469 -- # nvmfpid=1718237 00:21:23.711 17:58:27 -- nvmf/common.sh@470 -- # waitforlisten 1718237 00:21:23.711 17:58:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:23.711 17:58:27 -- common/autotest_common.sh@819 -- # '[' -z 1718237 ']' 00:21:23.711 17:58:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.711 17:58:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:23.711 17:58:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.711 17:58:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:23.711 17:58:27 -- common/autotest_common.sh@10 -- # set +x 00:21:23.711 [2024-07-22 17:58:27.783555] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:23.711 [2024-07-22 17:58:27.783642] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.711 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.711 [2024-07-22 17:58:27.858927] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.712 [2024-07-22 17:58:27.919910] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:23.712 [2024-07-22 17:58:27.920023] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.712 [2024-07-22 17:58:27.920030] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.712 [2024-07-22 17:58:27.920037] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:23.712 [2024-07-22 17:58:27.920053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.973 [2024-07-22 17:58:28.098011] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.973 [2024-07-22 17:58:28.130049] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:23.973 [2024-07-22 17:58:28.130229] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.544 17:58:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:24.544 17:58:28 -- common/autotest_common.sh@852 -- # return 0 00:21:24.544 17:58:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:24.544 17:58:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:24.544 17:58:28 -- common/autotest_common.sh@10 -- # set +x 00:21:24.544 17:58:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:24.544 17:58:28 -- target/tls.sh@216 -- # bdevperf_pid=1718316 00:21:24.544 17:58:28 -- target/tls.sh@217 -- # waitforlisten 1718316 /var/tmp/bdevperf.sock 00:21:24.544 17:58:28 -- common/autotest_common.sh@819 -- # '[' -z 1718316 ']' 00:21:24.544 17:58:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:24.544 17:58:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:24.544 17:58:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:24.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:24.545 17:58:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:24.545 17:58:28 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:24.545 17:58:28 -- common/autotest_common.sh@10 -- # set +x 00:21:24.545 17:58:28 -- target/tls.sh@213 -- # echo '{ 00:21:24.545 "subsystems": [ 00:21:24.545 { 00:21:24.545 "subsystem": "iobuf", 00:21:24.545 "config": [ 00:21:24.545 { 00:21:24.545 "method": "iobuf_set_options", 00:21:24.545 "params": { 00:21:24.545 "small_pool_count": 8192, 00:21:24.545 "large_pool_count": 1024, 00:21:24.545 "small_bufsize": 8192, 00:21:24.545 "large_bufsize": 135168 00:21:24.545 } 00:21:24.545 } 00:21:24.545 ] 00:21:24.545 }, 00:21:24.545 { 00:21:24.545 "subsystem": "sock", 00:21:24.545 "config": [ 00:21:24.545 { 00:21:24.545 "method": "sock_impl_set_options", 00:21:24.545 "params": { 00:21:24.545 "impl_name": "posix", 00:21:24.545 "recv_buf_size": 2097152, 00:21:24.545 "send_buf_size": 2097152, 00:21:24.545 "enable_recv_pipe": true, 00:21:24.545 "enable_quickack": false, 00:21:24.545 "enable_placement_id": 0, 00:21:24.545 "enable_zerocopy_send_server": true, 00:21:24.545 "enable_zerocopy_send_client": false, 00:21:24.545 "zerocopy_threshold": 0, 00:21:24.545 "tls_version": 0, 00:21:24.545 "enable_ktls": false 00:21:24.545 } 00:21:24.545 }, 00:21:24.545 { 00:21:24.545 "method": "sock_impl_set_options", 00:21:24.545 "params": { 00:21:24.545 "impl_name": "ssl", 00:21:24.545 "recv_buf_size": 4096, 00:21:24.545 "send_buf_size": 4096, 00:21:24.545 "enable_recv_pipe": true, 00:21:24.545 "enable_quickack": false, 00:21:24.545 "enable_placement_id": 0, 00:21:24.545 "enable_zerocopy_send_server": true, 00:21:24.545 "enable_zerocopy_send_client": false, 00:21:24.545 "zerocopy_threshold": 0, 00:21:24.545 "tls_version": 0, 
00:21:24.545 "enable_ktls": false 00:21:24.545 } 00:21:24.545 } 00:21:24.545 ] 00:21:24.545 }, 00:21:24.545 { 00:21:24.545 "subsystem": "vmd", 00:21:24.545 "config": [] 00:21:24.545 }, 00:21:24.545 { 00:21:24.545 "subsystem": "accel", 00:21:24.545 "config": [ 00:21:24.545 { 00:21:24.545 "method": "accel_set_options", 00:21:24.545 "params": { 00:21:24.545 "small_cache_size": 128, 00:21:24.545 "large_cache_size": 16, 00:21:24.545 "task_count": 2048, 00:21:24.545 "sequence_count": 2048, 00:21:24.545 "buf_count": 2048 00:21:24.545 } 00:21:24.545 } 00:21:24.545 ] 00:21:24.545 }, 00:21:24.545 { 00:21:24.545 "subsystem": "bdev", 00:21:24.545 "config": [ 00:21:24.545 { 00:21:24.545 "method": "bdev_set_options", 00:21:24.545 "params": { 00:21:24.545 "bdev_io_pool_size": 65535, 00:21:24.545 "bdev_io_cache_size": 256, 00:21:24.545 "bdev_auto_examine": true, 00:21:24.545 "iobuf_small_cache_size": 128, 00:21:24.545 "iobuf_large_cache_size": 16 00:21:24.545 } 00:21:24.545 }, 00:21:24.545 { 00:21:24.545 "method": "bdev_raid_set_options", 00:21:24.545 "params": { 00:21:24.545 "process_window_size_kb": 1024 00:21:24.545 } 00:21:24.545 }, 00:21:24.545 { 00:21:24.545 "method": "bdev_iscsi_set_options", 00:21:24.545 "params": { 00:21:24.545 "timeout_sec": 30 00:21:24.545 } 00:21:24.545 }, 00:21:24.545 { 00:21:24.545 "method": "bdev_nvme_set_options", 00:21:24.545 "params": { 00:21:24.545 "action_on_timeout": "none", 00:21:24.545 "timeout_us": 0, 00:21:24.545 "timeout_admin_us": 0, 00:21:24.545 "keep_alive_timeout_ms": 10000, 00:21:24.545 "transport_retry_count": 4, 00:21:24.545 "arbitration_burst": 0, 00:21:24.545 "low_priority_weight": 0, 00:21:24.545 "medium_priority_weight": 0, 00:21:24.545 "high_priority_weight": 0, 00:21:24.545 "nvme_adminq_poll_period_us": 10000, 00:21:24.545 "nvme_ioq_poll_period_us": 0, 00:21:24.545 "io_queue_requests": 512, 00:21:24.545 "delay_cmd_submit": true, 00:21:24.545 "bdev_retry_count": 3, 00:21:24.545 "transport_ack_timeout": 0, 00:21:24.545 "ctrlr_loss_timeout_sec": 0, 00:21:24.545 "reconnect_delay_sec": 0, 00:21:24.545 "fast_io_fail_timeout_sec": 0, 00:21:24.545 "generate_uuids": false, 00:21:24.545 "transport_tos": 0, 00:21:24.545 "io_path_stat": false, 00:21:24.545 "allow_accel_sequence": false 00:21:24.545 } 00:21:24.545 }, 00:21:24.545 { 00:21:24.545 "method": "bdev_nvme_attach_controller", 00:21:24.545 "params": { 00:21:24.545 "name": "TLSTEST", 00:21:24.545 "trtype": "TCP", 00:21:24.545 "adrfam": "IPv4", 00:21:24.545 "traddr": "10.0.0.2", 00:21:24.545 "trsvcid": "4420", 00:21:24.545 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.545 "prchk_reftag": false, 00:21:24.545 "prchk_guard": false, 00:21:24.545 "ctrlr_loss_timeout_sec": 0, 00:21:24.545 "reconnect_delay_sec": 0, 00:21:24.545 "fast_io_fail_timeout_sec": 0, 00:21:24.545 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:21:24.545 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:24.545 "hdgst": false, 00:21:24.545 "ddgst": false 00:21:24.545 } 00:21:24.545 }, 00:21:24.545 { 00:21:24.545 "method": "bdev_nvme_set_hotplug", 00:21:24.545 "params": { 00:21:24.545 "period_us": 100000, 00:21:24.545 "enable": false 00:21:24.545 } 00:21:24.545 }, 00:21:24.545 { 00:21:24.545 "method": "bdev_wait_for_examine" 00:21:24.545 } 00:21:24.545 ] 00:21:24.545 }, 00:21:24.545 { 00:21:24.545 "subsystem": "nbd", 00:21:24.545 "config": [] 00:21:24.545 } 00:21:24.545 ] 00:21:24.545 }' 00:21:24.545 [2024-07-22 17:58:28.688908] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 
initialization... 00:21:24.545 [2024-07-22 17:58:28.688956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1718316 ] 00:21:24.545 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.545 [2024-07-22 17:58:28.742542] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.545 [2024-07-22 17:58:28.793675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.806 [2024-07-22 17:58:28.908595] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:25.377 17:58:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:25.377 17:58:29 -- common/autotest_common.sh@852 -- # return 0 00:21:25.377 17:58:29 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:25.377 Running I/O for 10 seconds... 00:21:37.612 00:21:37.612 Latency(us) 00:21:37.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.612 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:37.612 Verification LBA range: start 0x0 length 0x2000 00:21:37.612 TLSTESTn1 : 10.03 5143.15 20.09 0.00 0.00 24847.74 3453.24 53638.70 00:21:37.612 =================================================================================================================== 00:21:37.612 Total : 5143.15 20.09 0.00 0.00 24847.74 3453.24 53638.70 00:21:37.612 0 00:21:37.612 17:58:39 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:37.612 17:58:39 -- target/tls.sh@223 -- # killprocess 1718316 00:21:37.612 17:58:39 -- common/autotest_common.sh@926 -- # '[' -z 1718316 ']' 00:21:37.612 17:58:39 -- common/autotest_common.sh@930 -- # kill -0 1718316 00:21:37.612 17:58:39 -- common/autotest_common.sh@931 -- # uname 00:21:37.612 17:58:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:37.612 17:58:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1718316 00:21:37.612 17:58:39 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:37.612 17:58:39 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:37.612 17:58:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1718316' 00:21:37.612 killing process with pid 1718316 00:21:37.612 17:58:39 -- common/autotest_common.sh@945 -- # kill 1718316 00:21:37.612 Received shutdown signal, test time was about 10.000000 seconds 00:21:37.612 00:21:37.612 Latency(us) 00:21:37.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.612 =================================================================================================================== 00:21:37.612 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:37.612 17:58:39 -- common/autotest_common.sh@950 -- # wait 1718316 00:21:37.612 17:58:39 -- target/tls.sh@224 -- # killprocess 1718237 00:21:37.612 17:58:39 -- common/autotest_common.sh@926 -- # '[' -z 1718237 ']' 00:21:37.612 17:58:39 -- common/autotest_common.sh@930 -- # kill -0 1718237 00:21:37.612 17:58:39 -- common/autotest_common.sh@931 -- # uname 00:21:37.612 17:58:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:37.612 17:58:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1718237 00:21:37.612 17:58:39 -- 
common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:37.612 17:58:39 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:37.612 17:58:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1718237' 00:21:37.612 killing process with pid 1718237 00:21:37.612 17:58:39 -- common/autotest_common.sh@945 -- # kill 1718237 00:21:37.612 17:58:39 -- common/autotest_common.sh@950 -- # wait 1718237 00:21:37.612 17:58:40 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:21:37.612 17:58:40 -- target/tls.sh@227 -- # cleanup 00:21:37.612 17:58:40 -- target/tls.sh@15 -- # process_shm --id 0 00:21:37.612 17:58:40 -- common/autotest_common.sh@796 -- # type=--id 00:21:37.612 17:58:40 -- common/autotest_common.sh@797 -- # id=0 00:21:37.612 17:58:40 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:21:37.612 17:58:40 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:37.612 17:58:40 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:21:37.612 17:58:40 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:21:37.612 17:58:40 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:21:37.612 17:58:40 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:37.612 nvmf_trace.0 00:21:37.612 17:58:40 -- common/autotest_common.sh@811 -- # return 0 00:21:37.612 17:58:40 -- target/tls.sh@16 -- # killprocess 1718316 00:21:37.612 17:58:40 -- common/autotest_common.sh@926 -- # '[' -z 1718316 ']' 00:21:37.612 17:58:40 -- common/autotest_common.sh@930 -- # kill -0 1718316 00:21:37.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1718316) - No such process 00:21:37.612 17:58:40 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1718316 is not found' 00:21:37.612 Process with pid 1718316 is not found 00:21:37.612 17:58:40 -- target/tls.sh@17 -- # nvmftestfini 00:21:37.612 17:58:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:37.612 17:58:40 -- nvmf/common.sh@116 -- # sync 00:21:37.612 17:58:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:37.612 17:58:40 -- nvmf/common.sh@119 -- # set +e 00:21:37.612 17:58:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:37.612 17:58:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:37.612 rmmod nvme_tcp 00:21:37.612 rmmod nvme_fabrics 00:21:37.612 rmmod nvme_keyring 00:21:37.612 17:58:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:37.612 17:58:40 -- nvmf/common.sh@123 -- # set -e 00:21:37.612 17:58:40 -- nvmf/common.sh@124 -- # return 0 00:21:37.612 17:58:40 -- nvmf/common.sh@477 -- # '[' -n 1718237 ']' 00:21:37.612 17:58:40 -- nvmf/common.sh@478 -- # killprocess 1718237 00:21:37.612 17:58:40 -- common/autotest_common.sh@926 -- # '[' -z 1718237 ']' 00:21:37.612 17:58:40 -- common/autotest_common.sh@930 -- # kill -0 1718237 00:21:37.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1718237) - No such process 00:21:37.612 17:58:40 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1718237 is not found' 00:21:37.612 Process with pid 1718237 is not found 00:21:37.612 17:58:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:37.612 17:58:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:37.612 17:58:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:37.612 17:58:40 -- nvmf/common.sh@273 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:37.612 17:58:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:37.612 17:58:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.612 17:58:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:37.612 17:58:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.185 17:58:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:38.185 17:58:42 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:38.185 00:21:38.185 real 1m14.840s 00:21:38.185 user 1m53.595s 00:21:38.185 sys 0m23.645s 00:21:38.185 17:58:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:38.185 17:58:42 -- common/autotest_common.sh@10 -- # set +x 00:21:38.185 ************************************ 00:21:38.185 END TEST nvmf_tls 00:21:38.185 ************************************ 00:21:38.185 17:58:42 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:38.185 17:58:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:38.185 17:58:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:38.185 17:58:42 -- common/autotest_common.sh@10 -- # set +x 00:21:38.185 ************************************ 00:21:38.185 START TEST nvmf_fips 00:21:38.185 ************************************ 00:21:38.185 17:58:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:38.185 * Looking for test storage... 
00:21:38.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:38.185 17:58:42 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:38.185 17:58:42 -- nvmf/common.sh@7 -- # uname -s 00:21:38.185 17:58:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:38.185 17:58:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:38.185 17:58:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:38.185 17:58:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:38.185 17:58:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:38.185 17:58:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:38.185 17:58:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:38.185 17:58:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:38.185 17:58:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:38.185 17:58:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:38.185 17:58:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:38.185 17:58:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:21:38.185 17:58:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:38.185 17:58:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:38.185 17:58:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:38.185 17:58:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:38.185 17:58:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:38.185 17:58:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:38.185 17:58:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:38.185 17:58:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.185 17:58:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.185 17:58:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.185 17:58:42 -- paths/export.sh@5 -- # export PATH 00:21:38.185 17:58:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.185 17:58:42 -- nvmf/common.sh@46 -- # : 0 00:21:38.185 17:58:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:38.185 17:58:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:38.185 17:58:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:38.185 17:58:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:38.185 17:58:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:38.185 17:58:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:38.185 17:58:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:38.185 17:58:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:38.185 17:58:42 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:38.185 17:58:42 -- fips/fips.sh@89 -- # check_openssl_version 00:21:38.185 17:58:42 -- fips/fips.sh@83 -- # local target=3.0.0 00:21:38.185 17:58:42 -- fips/fips.sh@85 -- # openssl version 00:21:38.185 17:58:42 -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:38.185 17:58:42 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:38.185 17:58:42 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:38.185 17:58:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:38.185 17:58:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:38.185 17:58:42 -- scripts/common.sh@335 -- # IFS=.-: 00:21:38.185 17:58:42 -- scripts/common.sh@335 -- # read -ra ver1 00:21:38.185 17:58:42 -- scripts/common.sh@336 -- # IFS=.-: 00:21:38.186 17:58:42 -- scripts/common.sh@336 -- # read -ra ver2 00:21:38.186 17:58:42 -- scripts/common.sh@337 -- # local 'op=>=' 00:21:38.186 17:58:42 -- scripts/common.sh@339 -- # ver1_l=3 00:21:38.186 17:58:42 -- scripts/common.sh@340 -- # ver2_l=3 00:21:38.186 17:58:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:38.186 17:58:42 -- scripts/common.sh@343 -- # case "$op" in 00:21:38.186 17:58:42 -- scripts/common.sh@347 -- # : 1 00:21:38.186 17:58:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:38.186 17:58:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:38.447 17:58:42 -- scripts/common.sh@364 -- # decimal 3 00:21:38.447 17:58:42 -- scripts/common.sh@352 -- # local d=3 00:21:38.447 17:58:42 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:38.447 17:58:42 -- scripts/common.sh@354 -- # echo 3 00:21:38.447 17:58:42 -- scripts/common.sh@364 -- # ver1[v]=3 00:21:38.447 17:58:42 -- scripts/common.sh@365 -- # decimal 3 00:21:38.447 17:58:42 -- scripts/common.sh@352 -- # local d=3 00:21:38.447 17:58:42 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:38.447 17:58:42 -- scripts/common.sh@354 -- # echo 3 00:21:38.447 17:58:42 -- scripts/common.sh@365 -- # ver2[v]=3 00:21:38.447 17:58:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:38.447 17:58:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:38.447 17:58:42 -- scripts/common.sh@363 -- # (( v++ )) 00:21:38.447 17:58:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:38.447 17:58:42 -- scripts/common.sh@364 -- # decimal 0 00:21:38.447 17:58:42 -- scripts/common.sh@352 -- # local d=0 00:21:38.447 17:58:42 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:38.447 17:58:42 -- scripts/common.sh@354 -- # echo 0 00:21:38.447 17:58:42 -- scripts/common.sh@364 -- # ver1[v]=0 00:21:38.447 17:58:42 -- scripts/common.sh@365 -- # decimal 0 00:21:38.447 17:58:42 -- scripts/common.sh@352 -- # local d=0 00:21:38.447 17:58:42 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:38.447 17:58:42 -- scripts/common.sh@354 -- # echo 0 00:21:38.447 17:58:42 -- scripts/common.sh@365 -- # ver2[v]=0 00:21:38.447 17:58:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:38.447 17:58:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:38.447 17:58:42 -- scripts/common.sh@363 -- # (( v++ )) 00:21:38.447 17:58:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:38.447 17:58:42 -- scripts/common.sh@364 -- # decimal 9 00:21:38.447 17:58:42 -- scripts/common.sh@352 -- # local d=9 00:21:38.447 17:58:42 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:38.447 17:58:42 -- scripts/common.sh@354 -- # echo 9 00:21:38.447 17:58:42 -- scripts/common.sh@364 -- # ver1[v]=9 00:21:38.447 17:58:42 -- scripts/common.sh@365 -- # decimal 0 00:21:38.447 17:58:42 -- scripts/common.sh@352 -- # local d=0 00:21:38.447 17:58:42 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:38.447 17:58:42 -- scripts/common.sh@354 -- # echo 0 00:21:38.447 17:58:42 -- scripts/common.sh@365 -- # ver2[v]=0 00:21:38.447 17:58:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:38.447 17:58:42 -- scripts/common.sh@366 -- # return 0 00:21:38.447 17:58:42 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:38.447 17:58:42 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:38.447 17:58:42 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:38.447 17:58:42 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:38.447 17:58:42 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:38.447 17:58:42 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:38.447 17:58:42 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:38.447 17:58:42 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:21:38.447 17:58:42 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:21:38.447 17:58:42 -- fips/fips.sh@114 -- # build_openssl_config 00:21:38.447 17:58:42 -- fips/fips.sh@37 -- # cat 00:21:38.447 17:58:42 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:21:38.447 17:58:42 -- fips/fips.sh@58 -- # cat - 00:21:38.447 17:58:42 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:38.447 17:58:42 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:38.447 17:58:42 -- fips/fips.sh@117 -- # mapfile -t providers 00:21:38.447 17:58:42 -- fips/fips.sh@117 -- # grep name 00:21:38.447 17:58:42 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:21:38.447 17:58:42 -- fips/fips.sh@117 -- # openssl list -providers 00:21:38.447 17:58:42 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:38.447 17:58:42 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:38.448 17:58:42 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:38.448 17:58:42 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:38.448 17:58:42 -- fips/fips.sh@128 -- # : 00:21:38.448 17:58:42 -- common/autotest_common.sh@640 -- # local es=0 00:21:38.448 17:58:42 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:38.448 17:58:42 -- common/autotest_common.sh@628 -- # local arg=openssl 00:21:38.448 17:58:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:38.448 17:58:42 -- common/autotest_common.sh@632 -- # type -t openssl 00:21:38.448 17:58:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:38.448 17:58:42 -- common/autotest_common.sh@634 -- # type -P openssl 00:21:38.448 17:58:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:38.448 17:58:42 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:21:38.448 17:58:42 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:21:38.448 17:58:42 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:21:38.448 Error setting digest 00:21:38.448 00E2CD0D8C7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:38.448 00E2CD0D8C7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:38.448 17:58:42 -- common/autotest_common.sh@643 -- # es=1 00:21:38.448 17:58:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:38.448 17:58:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:38.448 17:58:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
00:21:38.448 17:58:42 -- fips/fips.sh@131 -- # nvmftestinit 00:21:38.448 17:58:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:38.448 17:58:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:38.448 17:58:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:38.448 17:58:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:38.448 17:58:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:38.448 17:58:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.448 17:58:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:38.448 17:58:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.448 17:58:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:38.448 17:58:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:38.448 17:58:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:38.448 17:58:42 -- common/autotest_common.sh@10 -- # set +x 00:21:46.589 17:58:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:46.589 17:58:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:46.589 17:58:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:46.589 17:58:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:46.589 17:58:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:46.589 17:58:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:46.589 17:58:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:46.589 17:58:50 -- nvmf/common.sh@294 -- # net_devs=() 00:21:46.589 17:58:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:46.589 17:58:50 -- nvmf/common.sh@295 -- # e810=() 00:21:46.589 17:58:50 -- nvmf/common.sh@295 -- # local -ga e810 00:21:46.589 17:58:50 -- nvmf/common.sh@296 -- # x722=() 00:21:46.589 17:58:50 -- nvmf/common.sh@296 -- # local -ga x722 00:21:46.589 17:58:50 -- nvmf/common.sh@297 -- # mlx=() 00:21:46.589 17:58:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:46.589 17:58:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:46.589 17:58:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:46.589 17:58:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:46.589 17:58:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:46.589 17:58:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:46.589 17:58:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:46.589 17:58:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:46.589 17:58:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:46.589 17:58:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:46.589 17:58:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:46.589 17:58:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:46.589 17:58:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:46.589 17:58:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:46.589 17:58:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:46.589 17:58:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:46.589 17:58:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:46.589 17:58:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:46.589 17:58:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:46.589 17:58:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:46.589 Found 0000:4b:00.0 
(0x8086 - 0x159b) 00:21:46.589 17:58:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:46.589 17:58:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:46.589 17:58:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.589 17:58:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.589 17:58:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:46.589 17:58:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:46.589 17:58:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:46.589 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:46.589 17:58:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:46.589 17:58:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:46.589 17:58:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.589 17:58:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.589 17:58:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:46.589 17:58:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:46.589 17:58:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:46.589 17:58:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:46.589 17:58:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:46.589 17:58:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.589 17:58:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:46.589 17:58:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.589 17:58:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:46.589 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:46.589 17:58:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.589 17:58:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:46.589 17:58:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.589 17:58:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:46.589 17:58:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.589 17:58:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:46.589 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:46.589 17:58:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.589 17:58:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:46.589 17:58:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:46.589 17:58:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:46.589 17:58:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:46.589 17:58:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:46.589 17:58:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:46.589 17:58:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:46.589 17:58:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:46.589 17:58:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:46.589 17:58:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:46.589 17:58:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:46.589 17:58:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:46.589 17:58:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:46.589 17:58:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:46.589 17:58:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:46.589 17:58:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:46.589 17:58:50 -- nvmf/common.sh@247 -- # ip netns 
add cvl_0_0_ns_spdk 00:21:46.589 17:58:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:46.589 17:58:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:46.589 17:58:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:46.589 17:58:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:46.589 17:58:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:46.589 17:58:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:46.589 17:58:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:46.589 17:58:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:46.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:46.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.864 ms 00:21:46.589 00:21:46.589 --- 10.0.0.2 ping statistics --- 00:21:46.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.589 rtt min/avg/max/mdev = 0.864/0.864/0.864/0.000 ms 00:21:46.589 17:58:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:46.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:46.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:21:46.589 00:21:46.589 --- 10.0.0.1 ping statistics --- 00:21:46.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.589 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:21:46.589 17:58:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:46.589 17:58:50 -- nvmf/common.sh@410 -- # return 0 00:21:46.589 17:58:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:46.589 17:58:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:46.589 17:58:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:46.589 17:58:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:46.589 17:58:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:46.589 17:58:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:46.589 17:58:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:46.589 17:58:50 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:46.589 17:58:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:46.589 17:58:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:46.589 17:58:50 -- common/autotest_common.sh@10 -- # set +x 00:21:46.589 17:58:50 -- nvmf/common.sh@469 -- # nvmfpid=1724700 00:21:46.589 17:58:50 -- nvmf/common.sh@470 -- # waitforlisten 1724700 00:21:46.590 17:58:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:46.590 17:58:50 -- common/autotest_common.sh@819 -- # '[' -z 1724700 ']' 00:21:46.590 17:58:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.590 17:58:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:46.590 17:58:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.590 17:58:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:46.590 17:58:50 -- common/autotest_common.sh@10 -- # set +x 00:21:46.590 [2024-07-22 17:58:50.782457] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:21:46.590 [2024-07-22 17:58:50.782523] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:46.590 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.590 [2024-07-22 17:58:50.856280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.849 [2024-07-22 17:58:50.924309] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:46.849 [2024-07-22 17:58:50.924434] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:46.849 [2024-07-22 17:58:50.924442] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:46.849 [2024-07-22 17:58:50.924449] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:46.849 [2024-07-22 17:58:50.924471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.421 17:58:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:47.421 17:58:51 -- common/autotest_common.sh@852 -- # return 0 00:21:47.421 17:58:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:47.421 17:58:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:47.421 17:58:51 -- common/autotest_common.sh@10 -- # set +x 00:21:47.421 17:58:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.421 17:58:51 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:47.421 17:58:51 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:47.421 17:58:51 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:47.421 17:58:51 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:47.421 17:58:51 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:47.421 17:58:51 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:47.421 17:58:51 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:47.421 17:58:51 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:47.682 [2024-07-22 17:58:51.806337] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:47.682 [2024-07-22 17:58:51.822358] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:47.682 [2024-07-22 17:58:51.822524] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:47.682 malloc0 00:21:47.682 17:58:51 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:47.682 17:58:51 -- fips/fips.sh@148 -- # bdevperf_pid=1724826 00:21:47.682 17:58:51 -- fips/fips.sh@149 -- # waitforlisten 1724826 /var/tmp/bdevperf.sock 00:21:47.682 17:58:51 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:47.682 17:58:51 -- common/autotest_common.sh@819 -- # '[' -z 1724826 ']' 00:21:47.682 17:58:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:47.682 17:58:51 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:21:47.682 17:58:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:47.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:47.682 17:58:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:47.682 17:58:51 -- common/autotest_common.sh@10 -- # set +x 00:21:47.682 [2024-07-22 17:58:51.939302] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:47.682 [2024-07-22 17:58:51.939358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1724826 ] 00:21:47.944 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.944 [2024-07-22 17:58:51.993271] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.944 [2024-07-22 17:58:52.044664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.515 17:58:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:48.515 17:58:52 -- common/autotest_common.sh@852 -- # return 0 00:21:48.515 17:58:52 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:48.775 [2024-07-22 17:58:52.917641] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:48.775 TLSTESTn1 00:21:48.775 17:58:53 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:49.037 Running I/O for 10 seconds... 
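[editor's note] The TLS data-path exercise traced above reduces to three steps: start bdevperf in wait-for-RPC mode, attach a controller over NVMe/TCP presenting the pre-shared key, then trigger the verify workload. A minimal by-hand sketch of the same flow, using the socket paths, NQNs and key file from the log, with $SPDK standing in for the checkout at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk:

  # bdevperf waits (-z) for configuration over its own RPC socket
  $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # attach the remote namespace over TCP, handing the TLS PSK file to the initiator
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk $SPDK/test/nvmf/fips/key.txt
  # kick off the verify workload configured on the bdevperf command line above
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests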
00:21:59.066 00:21:59.066 Latency(us) 00:21:59.066 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.066 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:59.066 Verification LBA range: start 0x0 length 0x2000 00:21:59.066 TLSTESTn1 : 10.01 6693.01 26.14 0.00 0.00 19104.09 3554.07 51218.90 00:21:59.066 =================================================================================================================== 00:21:59.066 Total : 6693.01 26.14 0.00 0.00 19104.09 3554.07 51218.90 00:21:59.066 0 00:21:59.066 17:59:03 -- fips/fips.sh@1 -- # cleanup 00:21:59.066 17:59:03 -- fips/fips.sh@15 -- # process_shm --id 0 00:21:59.066 17:59:03 -- common/autotest_common.sh@796 -- # type=--id 00:21:59.066 17:59:03 -- common/autotest_common.sh@797 -- # id=0 00:21:59.066 17:59:03 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:21:59.066 17:59:03 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:59.066 17:59:03 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:21:59.066 17:59:03 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:21:59.066 17:59:03 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:21:59.066 17:59:03 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:59.066 nvmf_trace.0 00:21:59.066 17:59:03 -- common/autotest_common.sh@811 -- # return 0 00:21:59.066 17:59:03 -- fips/fips.sh@16 -- # killprocess 1724826 00:21:59.066 17:59:03 -- common/autotest_common.sh@926 -- # '[' -z 1724826 ']' 00:21:59.066 17:59:03 -- common/autotest_common.sh@930 -- # kill -0 1724826 00:21:59.066 17:59:03 -- common/autotest_common.sh@931 -- # uname 00:21:59.066 17:59:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:59.066 17:59:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1724826 00:21:59.066 17:59:03 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:59.066 17:59:03 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:59.066 17:59:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1724826' 00:21:59.066 killing process with pid 1724826 00:21:59.066 17:59:03 -- common/autotest_common.sh@945 -- # kill 1724826 00:21:59.066 Received shutdown signal, test time was about 10.000000 seconds 00:21:59.066 00:21:59.066 Latency(us) 00:21:59.066 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.066 =================================================================================================================== 00:21:59.066 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:59.066 17:59:03 -- common/autotest_common.sh@950 -- # wait 1724826 00:21:59.327 17:59:03 -- fips/fips.sh@17 -- # nvmftestfini 00:21:59.327 17:59:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:59.327 17:59:03 -- nvmf/common.sh@116 -- # sync 00:21:59.327 17:59:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:59.327 17:59:03 -- nvmf/common.sh@119 -- # set +e 00:21:59.327 17:59:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:59.327 17:59:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:59.327 rmmod nvme_tcp 00:21:59.327 rmmod nvme_fabrics 00:21:59.327 rmmod nvme_keyring 00:21:59.327 17:59:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:59.327 17:59:03 -- nvmf/common.sh@123 -- # set -e 00:21:59.327 17:59:03 -- nvmf/common.sh@124 -- # return 0 
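[editor's note] The shm archive step above preserves the tracepoint buffer the target announced at startup (instance id 0, group mask 0xFFFF). A sketch of the equivalent manual capture, assuming the default in-tree build location of spdk_trace and an illustrative archive name:

  # decode live events from the running target, as suggested by the startup notice
  $SPDK/build/bin/spdk_trace -s nvmf -i 0
  # or keep the raw /dev/shm buffer for offline analysis, as the harness does here
  tar -C /dev/shm -czf nvmf_trace.0_shm.tar.gz nvmf_trace.0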
00:21:59.327 17:59:03 -- nvmf/common.sh@477 -- # '[' -n 1724700 ']' 00:21:59.327 17:59:03 -- nvmf/common.sh@478 -- # killprocess 1724700 00:21:59.327 17:59:03 -- common/autotest_common.sh@926 -- # '[' -z 1724700 ']' 00:21:59.327 17:59:03 -- common/autotest_common.sh@930 -- # kill -0 1724700 00:21:59.327 17:59:03 -- common/autotest_common.sh@931 -- # uname 00:21:59.327 17:59:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:59.327 17:59:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1724700 00:21:59.327 17:59:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:59.327 17:59:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:59.327 17:59:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1724700' 00:21:59.327 killing process with pid 1724700 00:21:59.327 17:59:03 -- common/autotest_common.sh@945 -- # kill 1724700 00:21:59.327 17:59:03 -- common/autotest_common.sh@950 -- # wait 1724700 00:21:59.589 17:59:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:59.589 17:59:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:59.589 17:59:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:59.589 17:59:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:59.589 17:59:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:59.589 17:59:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.589 17:59:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:59.589 17:59:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.501 17:59:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:01.501 17:59:05 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:01.501 00:22:01.501 real 0m23.404s 00:22:01.501 user 0m25.110s 00:22:01.501 sys 0m9.148s 00:22:01.501 17:59:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:01.501 17:59:05 -- common/autotest_common.sh@10 -- # set +x 00:22:01.501 ************************************ 00:22:01.501 END TEST nvmf_fips 00:22:01.501 ************************************ 00:22:01.501 17:59:05 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:22:01.501 17:59:05 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:22:01.501 17:59:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:01.501 17:59:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:01.501 17:59:05 -- common/autotest_common.sh@10 -- # set +x 00:22:01.501 ************************************ 00:22:01.501 START TEST nvmf_fuzz 00:22:01.501 ************************************ 00:22:01.501 17:59:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:22:01.763 * Looking for test storage... 
00:22:01.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:01.763 17:59:05 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:01.763 17:59:05 -- nvmf/common.sh@7 -- # uname -s 00:22:01.763 17:59:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:01.763 17:59:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:01.763 17:59:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:01.763 17:59:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:01.763 17:59:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:01.763 17:59:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:01.763 17:59:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:01.763 17:59:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:01.763 17:59:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:01.763 17:59:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:01.763 17:59:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:22:01.763 17:59:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:22:01.763 17:59:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.763 17:59:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.763 17:59:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:01.763 17:59:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:01.763 17:59:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.763 17:59:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.763 17:59:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:01.763 17:59:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.763 17:59:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.763 17:59:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.763 17:59:05 -- paths/export.sh@5 -- # export PATH 00:22:01.763 17:59:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.763 17:59:05 -- nvmf/common.sh@46 -- # : 0 00:22:01.763 17:59:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:01.763 17:59:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:01.763 17:59:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:01.763 17:59:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.763 17:59:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:01.763 17:59:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:01.763 17:59:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:01.763 17:59:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:01.763 17:59:05 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:22:01.763 17:59:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:01.763 17:59:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:01.763 17:59:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:01.763 17:59:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:01.763 17:59:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:01.763 17:59:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.763 17:59:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:01.763 17:59:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.763 17:59:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:01.763 17:59:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:01.763 17:59:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:01.763 17:59:05 -- common/autotest_common.sh@10 -- # set +x 00:22:09.909 17:59:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:09.909 17:59:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:09.909 17:59:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:09.909 17:59:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:09.909 17:59:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:09.909 17:59:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:09.909 17:59:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:09.909 17:59:13 -- nvmf/common.sh@294 -- # net_devs=() 00:22:09.909 17:59:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:09.909 17:59:13 -- nvmf/common.sh@295 -- # e810=() 00:22:09.909 17:59:13 -- nvmf/common.sh@295 -- # local -ga e810 00:22:09.909 17:59:13 -- nvmf/common.sh@296 -- # x722=() 
00:22:09.909 17:59:13 -- nvmf/common.sh@296 -- # local -ga x722 00:22:09.909 17:59:13 -- nvmf/common.sh@297 -- # mlx=() 00:22:09.909 17:59:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:09.909 17:59:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:09.909 17:59:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:09.909 17:59:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:09.909 17:59:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:09.909 17:59:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:09.909 17:59:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:09.909 17:59:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:09.909 17:59:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:09.909 17:59:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:09.909 17:59:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:09.909 17:59:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:09.909 17:59:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:09.909 17:59:13 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:09.909 17:59:13 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:09.909 17:59:13 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:09.909 17:59:13 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:09.909 17:59:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:09.909 17:59:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:09.909 17:59:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:09.909 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:09.909 17:59:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:09.909 17:59:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:09.909 17:59:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.909 17:59:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.909 17:59:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:09.909 17:59:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:09.909 17:59:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:09.909 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:09.909 17:59:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:09.909 17:59:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:09.909 17:59:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.909 17:59:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.909 17:59:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:09.909 17:59:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:09.909 17:59:13 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:09.909 17:59:13 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:09.909 17:59:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:09.909 17:59:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.909 17:59:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:09.909 17:59:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.909 17:59:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:09.909 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:09.909 17:59:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:22:09.909 17:59:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:09.909 17:59:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.909 17:59:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:09.909 17:59:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.909 17:59:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:09.909 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:09.909 17:59:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.909 17:59:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:09.909 17:59:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:09.909 17:59:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:09.909 17:59:13 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:09.909 17:59:13 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:09.909 17:59:13 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:09.909 17:59:13 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:09.909 17:59:13 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:09.909 17:59:13 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:09.909 17:59:13 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:09.909 17:59:13 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:09.909 17:59:13 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:09.909 17:59:13 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:09.909 17:59:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:09.909 17:59:13 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:09.910 17:59:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:09.910 17:59:13 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:09.910 17:59:13 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:09.910 17:59:13 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:09.910 17:59:13 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:09.910 17:59:13 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:09.910 17:59:13 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:09.910 17:59:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:09.910 17:59:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:09.910 17:59:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:09.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:09.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:22:09.910 00:22:09.910 --- 10.0.0.2 ping statistics --- 00:22:09.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.910 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:22:09.910 17:59:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:09.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:09.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:22:09.910 00:22:09.910 --- 10.0.0.1 ping statistics --- 00:22:09.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.910 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:22:09.910 17:59:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:09.910 17:59:14 -- nvmf/common.sh@410 -- # return 0 00:22:09.910 17:59:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:09.910 17:59:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:09.910 17:59:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:09.910 17:59:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:09.910 17:59:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:09.910 17:59:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:09.910 17:59:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:09.910 17:59:14 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1731252 00:22:09.910 17:59:14 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:09.910 17:59:14 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:09.910 17:59:14 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1731252 00:22:09.910 17:59:14 -- common/autotest_common.sh@819 -- # '[' -z 1731252 ']' 00:22:09.910 17:59:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.910 17:59:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:09.910 17:59:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
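[editor's note] Stripped of the xtrace noise, the fuzz-target bring-up and the fuzz run traced over the next lines amount to the sequence below. This is a sketch, not the harness itself: rpc.py is assumed to reach the default /var/tmp/spdk.sock, which works even though the target runs inside the namespace because UNIX sockets are not scoped to network namespaces.

  # target lives in the test namespace, pinned to core 0 (-m 0x1)
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  # TCP transport plus one malloc-backed subsystem for the fuzzer to poke at
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $SPDK/scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # 30-second randomized pass with a fixed seed (-S) so failures replay deterministically
  $SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 \
      -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a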
00:22:09.910 17:59:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:09.910 17:59:14 -- common/autotest_common.sh@10 -- # set +x 00:22:10.851 17:59:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:10.851 17:59:14 -- common/autotest_common.sh@852 -- # return 0 00:22:10.851 17:59:14 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:10.851 17:59:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:10.851 17:59:14 -- common/autotest_common.sh@10 -- # set +x 00:22:10.851 17:59:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:10.851 17:59:14 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:22:10.851 17:59:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:10.851 17:59:14 -- common/autotest_common.sh@10 -- # set +x 00:22:10.851 Malloc0 00:22:10.851 17:59:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:10.851 17:59:15 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:10.851 17:59:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:10.851 17:59:15 -- common/autotest_common.sh@10 -- # set +x 00:22:10.851 17:59:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:10.851 17:59:15 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:10.851 17:59:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:10.851 17:59:15 -- common/autotest_common.sh@10 -- # set +x 00:22:10.851 17:59:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:10.851 17:59:15 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:10.851 17:59:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:10.851 17:59:15 -- common/autotest_common.sh@10 -- # set +x 00:22:10.851 17:59:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:10.851 17:59:15 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:22:10.851 17:59:15 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:22:43.071 Fuzzing completed. Shutting down the fuzz application 00:22:43.071 00:22:43.071 Dumping successful admin opcodes: 00:22:43.071 8, 9, 10, 24, 00:22:43.071 Dumping successful io opcodes: 00:22:43.071 0, 9, 00:22:43.071 NS: 0x200003aeff00 I/O qp, Total commands completed: 937992, total successful commands: 5474, random_seed: 914035776 00:22:43.071 NS: 0x200003aeff00 admin qp, Total commands completed: 121744, total successful commands: 1000, random_seed: 1297003136 00:22:43.071 17:59:45 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:22:43.071 Fuzzing completed. 
Shutting down the fuzz application 00:22:43.071 00:22:43.071 Dumping successful admin opcodes: 00:22:43.071 24, 00:22:43.071 Dumping successful io opcodes: 00:22:43.071 00:22:43.071 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1399150709 00:22:43.071 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1399269625 00:22:43.071 17:59:46 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:43.071 17:59:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:43.071 17:59:46 -- common/autotest_common.sh@10 -- # set +x 00:22:43.071 17:59:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:43.071 17:59:46 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:22:43.071 17:59:46 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:22:43.071 17:59:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:43.071 17:59:46 -- nvmf/common.sh@116 -- # sync 00:22:43.071 17:59:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:43.071 17:59:46 -- nvmf/common.sh@119 -- # set +e 00:22:43.071 17:59:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:43.071 17:59:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:43.071 rmmod nvme_tcp 00:22:43.071 rmmod nvme_fabrics 00:22:43.071 rmmod nvme_keyring 00:22:43.071 17:59:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:43.071 17:59:46 -- nvmf/common.sh@123 -- # set -e 00:22:43.071 17:59:46 -- nvmf/common.sh@124 -- # return 0 00:22:43.071 17:59:46 -- nvmf/common.sh@477 -- # '[' -n 1731252 ']' 00:22:43.071 17:59:46 -- nvmf/common.sh@478 -- # killprocess 1731252 00:22:43.071 17:59:46 -- common/autotest_common.sh@926 -- # '[' -z 1731252 ']' 00:22:43.071 17:59:46 -- common/autotest_common.sh@930 -- # kill -0 1731252 00:22:43.071 17:59:46 -- common/autotest_common.sh@931 -- # uname 00:22:43.071 17:59:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:43.071 17:59:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1731252 00:22:43.071 17:59:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:43.071 17:59:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:43.071 17:59:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1731252' 00:22:43.071 killing process with pid 1731252 00:22:43.071 17:59:46 -- common/autotest_common.sh@945 -- # kill 1731252 00:22:43.071 17:59:46 -- common/autotest_common.sh@950 -- # wait 1731252 00:22:43.071 17:59:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:43.071 17:59:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:43.071 17:59:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:43.071 17:59:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:43.071 17:59:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:43.071 17:59:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.071 17:59:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:43.071 17:59:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.983 17:59:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:44.983 17:59:48 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:22:44.983 00:22:44.983 real 0m43.230s 00:22:44.983 user 0m57.220s 00:22:44.983 sys 
0m15.399s 00:22:44.983 17:59:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:44.983 17:59:48 -- common/autotest_common.sh@10 -- # set +x 00:22:44.983 ************************************ 00:22:44.983 END TEST nvmf_fuzz 00:22:44.983 ************************************ 00:22:44.983 17:59:49 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:22:44.983 17:59:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:44.983 17:59:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:44.983 17:59:49 -- common/autotest_common.sh@10 -- # set +x 00:22:44.983 ************************************ 00:22:44.983 START TEST nvmf_multiconnection 00:22:44.983 ************************************ 00:22:44.983 17:59:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:22:44.983 * Looking for test storage... 00:22:44.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:44.983 17:59:49 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:44.983 17:59:49 -- nvmf/common.sh@7 -- # uname -s 00:22:44.983 17:59:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:44.983 17:59:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:44.983 17:59:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:44.983 17:59:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:44.983 17:59:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:44.983 17:59:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:44.983 17:59:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:44.983 17:59:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:44.983 17:59:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:44.983 17:59:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:44.983 17:59:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:22:44.983 17:59:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:22:44.983 17:59:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:44.983 17:59:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:44.983 17:59:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:44.983 17:59:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:44.983 17:59:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:44.983 17:59:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:44.983 17:59:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:44.983 17:59:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.983 17:59:49 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.983 17:59:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.983 17:59:49 -- paths/export.sh@5 -- # export PATH 00:22:44.983 17:59:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.983 17:59:49 -- nvmf/common.sh@46 -- # : 0 00:22:44.983 17:59:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:44.983 17:59:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:44.983 17:59:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:44.983 17:59:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:44.983 17:59:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:44.983 17:59:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:44.983 17:59:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:44.983 17:59:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:44.983 17:59:49 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:44.983 17:59:49 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:44.983 17:59:49 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:22:44.983 17:59:49 -- target/multiconnection.sh@16 -- # nvmftestinit 00:22:44.983 17:59:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:44.983 17:59:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:44.983 17:59:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:44.984 17:59:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:44.984 17:59:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:44.984 17:59:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.984 17:59:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:44.984 17:59:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.984 17:59:49 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:44.984 17:59:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:44.984 17:59:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:44.984 17:59:49 -- common/autotest_common.sh@10 -- 
# set +x 00:22:53.129 17:59:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:53.129 17:59:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:53.129 17:59:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:53.129 17:59:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:53.129 17:59:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:53.129 17:59:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:53.129 17:59:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:53.129 17:59:56 -- nvmf/common.sh@294 -- # net_devs=() 00:22:53.129 17:59:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:53.129 17:59:56 -- nvmf/common.sh@295 -- # e810=() 00:22:53.129 17:59:56 -- nvmf/common.sh@295 -- # local -ga e810 00:22:53.129 17:59:56 -- nvmf/common.sh@296 -- # x722=() 00:22:53.129 17:59:56 -- nvmf/common.sh@296 -- # local -ga x722 00:22:53.129 17:59:56 -- nvmf/common.sh@297 -- # mlx=() 00:22:53.129 17:59:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:53.129 17:59:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:53.129 17:59:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:53.129 17:59:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:53.129 17:59:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:53.129 17:59:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:53.129 17:59:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:53.129 17:59:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:53.129 17:59:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:53.129 17:59:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:53.129 17:59:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:53.129 17:59:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:53.129 17:59:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:53.129 17:59:56 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:53.129 17:59:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:53.129 17:59:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:53.129 17:59:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:53.129 17:59:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:53.129 17:59:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:53.129 17:59:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:53.129 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:53.129 17:59:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:53.129 17:59:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:53.129 17:59:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.129 17:59:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.129 17:59:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:53.129 17:59:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:53.129 17:59:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:53.129 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:53.129 17:59:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:53.129 17:59:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:53.129 17:59:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.129 17:59:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.129 17:59:56 -- 
nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:53.129 17:59:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:53.129 17:59:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:53.129 17:59:56 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:53.129 17:59:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:53.129 17:59:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.129 17:59:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:53.129 17:59:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.129 17:59:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:53.129 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:53.129 17:59:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.129 17:59:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:53.129 17:59:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.129 17:59:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:53.129 17:59:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.129 17:59:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:53.129 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:53.129 17:59:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.129 17:59:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:53.129 17:59:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:53.129 17:59:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:53.129 17:59:56 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:53.129 17:59:56 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:53.129 17:59:56 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:53.129 17:59:56 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:53.129 17:59:56 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:53.129 17:59:56 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:53.129 17:59:56 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:53.129 17:59:56 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:53.129 17:59:56 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:53.129 17:59:56 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:53.129 17:59:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:53.129 17:59:56 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:53.129 17:59:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:53.130 17:59:56 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:53.130 17:59:56 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:53.130 17:59:56 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:53.130 17:59:56 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:53.130 17:59:56 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:53.130 17:59:56 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:53.130 17:59:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:53.130 17:59:57 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:53.130 17:59:57 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:53.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:53.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.535 ms 00:22:53.130 00:22:53.130 --- 10.0.0.2 ping statistics --- 00:22:53.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.130 rtt min/avg/max/mdev = 0.535/0.535/0.535/0.000 ms 00:22:53.130 17:59:57 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:53.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:53.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:22:53.130 00:22:53.130 --- 10.0.0.1 ping statistics --- 00:22:53.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.130 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:22:53.130 17:59:57 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:53.130 17:59:57 -- nvmf/common.sh@410 -- # return 0 00:22:53.130 17:59:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:53.130 17:59:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:53.130 17:59:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:53.130 17:59:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:53.130 17:59:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:53.130 17:59:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:53.130 17:59:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:53.130 17:59:57 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:22:53.130 17:59:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:53.130 17:59:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:53.130 17:59:57 -- common/autotest_common.sh@10 -- # set +x 00:22:53.130 17:59:57 -- nvmf/common.sh@469 -- # nvmfpid=1741115 00:22:53.130 17:59:57 -- nvmf/common.sh@470 -- # waitforlisten 1741115 00:22:53.130 17:59:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:53.130 17:59:57 -- common/autotest_common.sh@819 -- # '[' -z 1741115 ']' 00:22:53.130 17:59:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.130 17:59:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:53.130 17:59:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:53.130 17:59:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:53.130 17:59:57 -- common/autotest_common.sh@10 -- # set +x 00:22:53.130 [2024-07-22 17:59:57.182087] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:53.130 [2024-07-22 17:59:57.182149] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:53.130 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.130 [2024-07-22 17:59:57.274830] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:53.130 [2024-07-22 17:59:57.367398] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:53.130 [2024-07-22 17:59:57.367563] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:53.130 [2024-07-22 17:59:57.367572] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:53.130 [2024-07-22 17:59:57.367579] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:53.130 [2024-07-22 17:59:57.367721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.130 [2024-07-22 17:59:57.367848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.130 [2024-07-22 17:59:57.367978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:53.130 [2024-07-22 17:59:57.367981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.071 17:59:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:54.071 17:59:58 -- common/autotest_common.sh@852 -- # return 0 00:22:54.071 17:59:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:54.071 17:59:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:54.071 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.071 17:59:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.071 17:59:58 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:54.071 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.071 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.071 [2024-07-22 17:59:58.084527] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:54.071 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.071 17:59:58 -- target/multiconnection.sh@21 -- # seq 1 11 00:22:54.071 17:59:58 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.071 17:59:58 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:54.071 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.071 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.071 Malloc1 00:22:54.071 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.071 17:59:58 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:22:54.071 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.071 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.071 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.071 17:59:58 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:54.071 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.071 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.071 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.071 17:59:58 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:54.071 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.071 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.071 [2024-07-22 17:59:58.132782] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:54.071 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.071 17:59:58 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.071 17:59:58 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:22:54.071 17:59:58 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.071 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.071 Malloc2 00:22:54.071 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.071 17:59:58 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:22:54.071 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.071 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.071 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.071 17:59:58 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:22:54.071 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.071 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.071 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.071 17:59:58 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:54.071 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.071 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.071 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.071 17:59:58 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.071 17:59:58 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:22:54.071 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.071 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.071 Malloc3 00:22:54.071 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.071 17:59:58 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:22:54.071 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.071 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.071 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.071 17:59:58 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:22:54.071 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.071 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.071 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.072 17:59:58 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:22:54.072 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.072 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.072 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.072 17:59:58 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.072 17:59:58 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:22:54.072 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.072 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.072 Malloc4 00:22:54.072 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.072 17:59:58 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:22:54.072 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.072 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.072 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.072 17:59:58 -- target/multiconnection.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:22:54.072 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.072 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.072 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.072 17:59:58 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:22:54.072 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.072 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.072 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.072 17:59:58 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.072 17:59:58 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:22:54.072 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.072 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.072 Malloc5 00:22:54.072 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.072 17:59:58 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:22:54.072 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.072 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.072 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.072 17:59:58 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:22:54.072 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.072 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.072 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.072 17:59:58 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:22:54.072 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.072 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.072 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.072 17:59:58 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.072 17:59:58 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:22:54.072 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.072 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.072 Malloc6 00:22:54.072 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.072 17:59:58 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:22:54.072 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.072 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.072 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.072 17:59:58 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:22:54.072 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.072 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.072 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.072 17:59:58 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:22:54.072 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.072 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.072 17:59:58 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.072 17:59:58 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.072 17:59:58 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:22:54.072 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.072 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.072 Malloc7 00:22:54.072 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.072 17:59:58 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:22:54.072 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.072 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.333 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.333 17:59:58 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:22:54.333 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.333 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.333 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.333 17:59:58 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:22:54.333 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.333 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.333 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.333 17:59:58 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.333 17:59:58 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:22:54.333 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.333 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.333 Malloc8 00:22:54.333 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.333 17:59:58 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:22:54.333 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.333 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.333 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.333 17:59:58 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:22:54.333 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.333 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.333 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.333 17:59:58 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:22:54.333 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.333 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.333 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.333 17:59:58 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.333 17:59:58 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:22:54.333 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.333 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.333 Malloc9 00:22:54.333 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.333 17:59:58 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 
00:22:54.333 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.333 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.333 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.333 17:59:58 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:22:54.333 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.333 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.333 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.333 17:59:58 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:22:54.333 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.333 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.333 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.333 17:59:58 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.333 17:59:58 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:22:54.333 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.333 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.333 Malloc10 00:22:54.333 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.333 17:59:58 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:22:54.333 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.333 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.333 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.333 17:59:58 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:22:54.333 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.333 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.333 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.333 17:59:58 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:22:54.333 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.333 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.333 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.333 17:59:58 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.333 17:59:58 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:22:54.333 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.333 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.333 Malloc11 00:22:54.333 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.333 17:59:58 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:22:54.333 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.333 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.333 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.333 17:59:58 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:22:54.333 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.333 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.333 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.333 17:59:58 -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:22:54.333 17:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.333 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:54.333 17:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.333 17:59:58 -- target/multiconnection.sh@28 -- # seq 1 11 00:22:54.333 17:59:58 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.333 17:59:58 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:55.717 17:59:59 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:22:55.718 17:59:59 -- common/autotest_common.sh@1177 -- # local i=0 00:22:55.718 17:59:59 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:55.718 17:59:59 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:55.718 17:59:59 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:58.262 18:00:01 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:58.262 18:00:01 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:58.262 18:00:01 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:22:58.262 18:00:01 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:58.262 18:00:01 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:58.262 18:00:01 -- common/autotest_common.sh@1187 -- # return 0 00:22:58.262 18:00:01 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:58.262 18:00:01 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:22:59.645 18:00:03 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:22:59.645 18:00:03 -- common/autotest_common.sh@1177 -- # local i=0 00:22:59.645 18:00:03 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:59.645 18:00:03 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:59.645 18:00:03 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:01.549 18:00:05 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:01.549 18:00:05 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:01.549 18:00:05 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:23:01.549 18:00:05 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:01.549 18:00:05 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:01.549 18:00:05 -- common/autotest_common.sh@1187 -- # return 0 00:23:01.549 18:00:05 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:01.549 18:00:05 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:23:02.929 18:00:07 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:23:02.929 18:00:07 -- common/autotest_common.sh@1177 -- # local i=0 00:23:02.929 18:00:07 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:02.929 18:00:07 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:02.929 18:00:07 -- 
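The rpc_cmd calls traced above repeat the same four-step setup for each of the eleven subsystems: create a 64 MB malloc bdev with a 512-byte block size, create the subsystem with any-host access (-a) and a serial number (-s), attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. A minimal stand-alone sketch of that loop, assuming a running nvmf_tgt with a TCP transport already created and SPDK's scripts/rpc.py on the path (rpc_cmd in the test harness forwards to rpc.py):

    # Per-subsystem setup mirroring multiconnection.sh lines 21-25 in the trace above.
    # Assumes nvmf_tgt is running and the TCP transport was already created
    # (e.g. "scripts/rpc.py nvmf_create_transport -t tcp").
    for i in $(seq 1 11); do
        scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"                      # 64 MB bdev, 512 B blocks
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done

Each subsystem therefore exposes exactly one namespace, which is what the connect loop below relies on when it waits for a single device per serial number.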
common/autotest_common.sh@1184 -- # sleep 2 00:23:05.466 18:00:09 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:05.466 18:00:09 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:05.466 18:00:09 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:23:05.466 18:00:09 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:05.466 18:00:09 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:05.466 18:00:09 -- common/autotest_common.sh@1187 -- # return 0 00:23:05.466 18:00:09 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:05.467 18:00:09 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:23:06.850 18:00:10 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:23:06.850 18:00:10 -- common/autotest_common.sh@1177 -- # local i=0 00:23:06.850 18:00:10 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:06.850 18:00:10 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:06.850 18:00:10 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:08.757 18:00:12 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:08.757 18:00:12 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:08.757 18:00:12 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:23:08.757 18:00:12 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:08.757 18:00:12 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:08.757 18:00:12 -- common/autotest_common.sh@1187 -- # return 0 00:23:08.757 18:00:12 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:08.757 18:00:12 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:23:10.670 18:00:14 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:23:10.670 18:00:14 -- common/autotest_common.sh@1177 -- # local i=0 00:23:10.670 18:00:14 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:10.670 18:00:14 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:10.670 18:00:14 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:12.579 18:00:16 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:12.579 18:00:16 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:12.579 18:00:16 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:23:12.579 18:00:16 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:12.579 18:00:16 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:12.579 18:00:16 -- common/autotest_common.sh@1187 -- # return 0 00:23:12.579 18:00:16 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:12.579 18:00:16 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:23:13.963 18:00:18 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:23:13.963 18:00:18 -- common/autotest_common.sh@1177 -- # local i=0 00:23:13.963 18:00:18 -- common/autotest_common.sh@1178 -- # local 
nvme_device_counter=1 nvme_devices=0 00:23:13.963 18:00:18 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:13.963 18:00:18 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:15.873 18:00:20 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:15.873 18:00:20 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:15.873 18:00:20 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:23:16.136 18:00:20 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:16.136 18:00:20 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:16.136 18:00:20 -- common/autotest_common.sh@1187 -- # return 0 00:23:16.136 18:00:20 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:16.136 18:00:20 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:23:18.044 18:00:21 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:23:18.044 18:00:21 -- common/autotest_common.sh@1177 -- # local i=0 00:23:18.044 18:00:21 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:18.044 18:00:21 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:18.044 18:00:21 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:19.962 18:00:23 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:19.962 18:00:23 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:19.962 18:00:23 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:23:19.962 18:00:23 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:19.962 18:00:23 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:19.962 18:00:23 -- common/autotest_common.sh@1187 -- # return 0 00:23:19.962 18:00:23 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:19.962 18:00:23 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:23:21.388 18:00:25 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:23:21.388 18:00:25 -- common/autotest_common.sh@1177 -- # local i=0 00:23:21.388 18:00:25 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:21.388 18:00:25 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:21.388 18:00:25 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:23.933 18:00:27 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:23.934 18:00:27 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:23.934 18:00:27 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:23:23.934 18:00:27 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:23.934 18:00:27 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:23.934 18:00:27 -- common/autotest_common.sh@1187 -- # return 0 00:23:23.934 18:00:27 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:23.934 18:00:27 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:23:25.315 18:00:29 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:23:25.315 
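Every connect in this loop is followed by the waitforserial helper visible in the trace: it repeatedly sleeps two seconds and re-checks lsblk until a block device whose SERIAL column matches the subsystem's serial (SPDK1 through SPDK11) appears. A condensed sketch of that connect-and-wait pattern for a single subsystem, assuming nvme-cli and lsblk are available; the host NQN and host ID are copied verbatim from the log:

    HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="nqn.2014-08.org.nvmexpress:uuid:${HOSTID}" --hostid="${HOSTID}"
    # waitforserial: poll until one device advertising serial SPDK1 shows up, as in the trace above.
    for try in $(seq 1 15); do
        [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDK1)" -ge 1 ] && break
        sleep 2
    done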
18:00:29 -- common/autotest_common.sh@1177 -- # local i=0 00:23:25.315 18:00:29 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:25.315 18:00:29 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:25.315 18:00:29 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:27.223 18:00:31 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:27.223 18:00:31 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:27.223 18:00:31 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:23:27.223 18:00:31 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:27.223 18:00:31 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:27.223 18:00:31 -- common/autotest_common.sh@1187 -- # return 0 00:23:27.223 18:00:31 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:27.223 18:00:31 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:23:29.141 18:00:33 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:23:29.141 18:00:33 -- common/autotest_common.sh@1177 -- # local i=0 00:23:29.141 18:00:33 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:29.141 18:00:33 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:29.141 18:00:33 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:31.681 18:00:35 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:31.681 18:00:35 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:31.681 18:00:35 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:23:31.681 18:00:35 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:31.681 18:00:35 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:31.681 18:00:35 -- common/autotest_common.sh@1187 -- # return 0 00:23:31.681 18:00:35 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:31.681 18:00:35 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:23:33.061 18:00:37 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:23:33.061 18:00:37 -- common/autotest_common.sh@1177 -- # local i=0 00:23:33.061 18:00:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:33.061 18:00:37 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:33.061 18:00:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:34.978 18:00:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:34.978 18:00:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:34.978 18:00:39 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:23:34.978 18:00:39 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:34.978 18:00:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:34.978 18:00:39 -- common/autotest_common.sh@1187 -- # return 0 00:23:34.978 18:00:39 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:23:34.978 [global] 00:23:34.978 thread=1 00:23:34.978 invalidate=1 00:23:34.978 rw=read 00:23:34.978 time_based=1 00:23:34.978 
runtime=10 00:23:34.978 ioengine=libaio 00:23:34.978 direct=1 00:23:34.978 bs=262144 00:23:34.978 iodepth=64 00:23:34.978 norandommap=1 00:23:34.978 numjobs=1 00:23:34.978 00:23:34.978 [job0] 00:23:34.978 filename=/dev/nvme0n1 00:23:34.978 [job1] 00:23:34.978 filename=/dev/nvme10n1 00:23:34.978 [job2] 00:23:34.978 filename=/dev/nvme1n1 00:23:34.978 [job3] 00:23:34.978 filename=/dev/nvme2n1 00:23:34.978 [job4] 00:23:34.978 filename=/dev/nvme3n1 00:23:34.978 [job5] 00:23:34.978 filename=/dev/nvme4n1 00:23:34.978 [job6] 00:23:34.978 filename=/dev/nvme5n1 00:23:34.978 [job7] 00:23:34.978 filename=/dev/nvme6n1 00:23:34.978 [job8] 00:23:34.978 filename=/dev/nvme7n1 00:23:34.978 [job9] 00:23:34.978 filename=/dev/nvme8n1 00:23:34.978 [job10] 00:23:34.978 filename=/dev/nvme9n1 00:23:35.238 Could not set queue depth (nvme0n1) 00:23:35.238 Could not set queue depth (nvme10n1) 00:23:35.238 Could not set queue depth (nvme1n1) 00:23:35.238 Could not set queue depth (nvme2n1) 00:23:35.238 Could not set queue depth (nvme3n1) 00:23:35.238 Could not set queue depth (nvme4n1) 00:23:35.238 Could not set queue depth (nvme5n1) 00:23:35.238 Could not set queue depth (nvme6n1) 00:23:35.238 Could not set queue depth (nvme7n1) 00:23:35.238 Could not set queue depth (nvme8n1) 00:23:35.238 Could not set queue depth (nvme9n1) 00:23:35.497 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:35.497 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:35.497 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:35.497 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:35.498 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:35.498 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:35.498 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:35.498 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:35.498 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:35.498 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:35.498 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:35.498 fio-3.35 00:23:35.498 Starting 11 threads 00:23:47.744 00:23:47.744 job0: (groupid=0, jobs=1): err= 0: pid=1749167: Mon Jul 22 18:00:50 2024 00:23:47.744 read: IOPS=1275, BW=319MiB/s (334MB/s)(3204MiB/10047msec) 00:23:47.744 slat (usec): min=7, max=113392, avg=672.23, stdev=2658.55 00:23:47.744 clat (msec): min=3, max=232, avg=49.43, stdev=29.02 00:23:47.744 lat (msec): min=3, max=235, avg=50.10, stdev=29.38 00:23:47.744 clat percentiles (msec): 00:23:47.744 | 1.00th=[ 16], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 33], 00:23:47.744 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 37], 60.00th=[ 43], 00:23:47.745 | 70.00th=[ 51], 80.00th=[ 59], 90.00th=[ 91], 95.00th=[ 124], 00:23:47.745 | 99.00th=[ 150], 99.50th=[ 157], 99.90th=[ 163], 99.95th=[ 176], 00:23:47.745 | 99.99th=[ 232] 00:23:47.745 bw ( KiB/s): min=144896, max=507904, per=13.05%, avg=326425.60, 
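The job file echoed above (and repeated for the randwrite pass later in the log) gives every connected namespace its own 256 KiB, queue-depth-64, ten-second libaio job. For a single device, a roughly equivalent direct fio invocation is sketched below; the fio-wrapper script additionally generates one [jobN] section per /dev/nvmeXn1 and handles the nvmf-specific housekeeping, so this is illustrative only:

    fio --name=job0 --filename=/dev/nvme0n1 \
        --rw=read --bs=262144 --iodepth=64 --ioengine=libaio --direct=1 \
        --time_based --runtime=10 --norandommap --numjobs=1 --thread --invalidate=1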
stdev=125068.23, samples=20 00:23:47.745 iops : min= 566, max= 1984, avg=1275.10, stdev=488.55, samples=20 00:23:47.745 lat (msec) : 4=0.07%, 10=0.60%, 20=1.03%, 50=67.40%, 100=22.35% 00:23:47.745 lat (msec) : 250=8.55% 00:23:47.745 cpu : usr=0.48%, sys=4.16%, ctx=2905, majf=0, minf=4097 00:23:47.745 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:23:47.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:47.745 issued rwts: total=12814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.745 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:47.745 job1: (groupid=0, jobs=1): err= 0: pid=1749168: Mon Jul 22 18:00:50 2024 00:23:47.745 read: IOPS=864, BW=216MiB/s (227MB/s)(2175MiB/10069msec) 00:23:47.745 slat (usec): min=7, max=126561, avg=917.44, stdev=4168.61 00:23:47.745 clat (usec): min=1313, max=276232, avg=73045.24, stdev=43072.08 00:23:47.745 lat (usec): min=1361, max=276298, avg=73962.68, stdev=43701.97 00:23:47.745 clat percentiles (msec): 00:23:47.745 | 1.00th=[ 3], 5.00th=[ 12], 10.00th=[ 17], 20.00th=[ 42], 00:23:47.745 | 30.00th=[ 54], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 74], 00:23:47.745 | 70.00th=[ 79], 80.00th=[ 94], 90.00th=[ 142], 95.00th=[ 167], 00:23:47.745 | 99.00th=[ 194], 99.50th=[ 207], 99.90th=[ 218], 99.95th=[ 247], 00:23:47.745 | 99.99th=[ 275] 00:23:47.745 bw ( KiB/s): min=107520, max=336384, per=8.84%, avg=221107.20, stdev=60561.36, samples=20 00:23:47.745 iops : min= 420, max= 1314, avg=863.70, stdev=236.57, samples=20 00:23:47.745 lat (msec) : 2=0.43%, 4=1.01%, 10=2.78%, 20=7.82%, 50=14.41% 00:23:47.745 lat (msec) : 100=55.17%, 250=18.37%, 500=0.01% 00:23:47.745 cpu : usr=0.35%, sys=2.93%, ctx=2252, majf=0, minf=3598 00:23:47.745 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:23:47.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:47.745 issued rwts: total=8700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.745 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:47.745 job2: (groupid=0, jobs=1): err= 0: pid=1749169: Mon Jul 22 18:00:50 2024 00:23:47.745 read: IOPS=1450, BW=363MiB/s (380MB/s)(3648MiB/10058msec) 00:23:47.745 slat (usec): min=7, max=44267, avg=682.72, stdev=1720.03 00:23:47.745 clat (msec): min=21, max=133, avg=43.40, stdev=18.93 00:23:47.745 lat (msec): min=21, max=133, avg=44.08, stdev=19.21 00:23:47.745 clat percentiles (msec): 00:23:47.745 | 1.00th=[ 24], 5.00th=[ 26], 10.00th=[ 27], 20.00th=[ 28], 00:23:47.745 | 30.00th=[ 29], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 42], 00:23:47.745 | 70.00th=[ 53], 80.00th=[ 68], 90.00th=[ 73], 95.00th=[ 77], 00:23:47.745 | 99.00th=[ 84], 99.50th=[ 94], 99.90th=[ 115], 99.95th=[ 118], 00:23:47.745 | 99.99th=[ 134] 00:23:47.745 bw ( KiB/s): min=211212, max=566784, per=14.86%, avg=371827.80, stdev=148443.50, samples=20 00:23:47.745 iops : min= 825, max= 2214, avg=1452.45, stdev=579.86, samples=20 00:23:47.745 lat (msec) : 50=66.63%, 100=32.96%, 250=0.41% 00:23:47.745 cpu : usr=0.53%, sys=3.96%, ctx=3009, majf=0, minf=4097 00:23:47.745 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:23:47.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:47.745 issued rwts: 
total=14590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.745 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:47.745 job3: (groupid=0, jobs=1): err= 0: pid=1749172: Mon Jul 22 18:00:50 2024 00:23:47.745 read: IOPS=514, BW=129MiB/s (135MB/s)(1299MiB/10089msec) 00:23:47.745 slat (usec): min=9, max=79339, avg=1922.76, stdev=5192.06 00:23:47.745 clat (msec): min=16, max=238, avg=122.16, stdev=27.61 00:23:47.745 lat (msec): min=16, max=238, avg=124.09, stdev=28.23 00:23:47.745 clat percentiles (msec): 00:23:47.745 | 1.00th=[ 81], 5.00th=[ 92], 10.00th=[ 95], 20.00th=[ 99], 00:23:47.745 | 30.00th=[ 101], 40.00th=[ 105], 50.00th=[ 118], 60.00th=[ 130], 00:23:47.745 | 70.00th=[ 138], 80.00th=[ 146], 90.00th=[ 161], 95.00th=[ 171], 00:23:47.745 | 99.00th=[ 188], 99.50th=[ 192], 99.90th=[ 222], 99.95th=[ 236], 00:23:47.745 | 99.99th=[ 239] 00:23:47.745 bw ( KiB/s): min=97280, max=168960, per=5.25%, avg=131379.20, stdev=26848.68, samples=20 00:23:47.745 iops : min= 380, max= 660, avg=513.20, stdev=104.88, samples=20 00:23:47.745 lat (msec) : 20=0.15%, 50=0.37%, 100=28.49%, 250=70.99% 00:23:47.745 cpu : usr=0.31%, sys=1.93%, ctx=1221, majf=0, minf=4097 00:23:47.745 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:47.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:47.745 issued rwts: total=5195,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.745 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:47.745 job4: (groupid=0, jobs=1): err= 0: pid=1749175: Mon Jul 22 18:00:50 2024 00:23:47.745 read: IOPS=937, BW=234MiB/s (246MB/s)(2347MiB/10019msec) 00:23:47.745 slat (usec): min=7, max=92256, avg=877.60, stdev=3637.22 00:23:47.745 clat (msec): min=3, max=227, avg=67.31, stdev=37.39 00:23:47.745 lat (msec): min=3, max=269, avg=68.18, stdev=37.85 00:23:47.745 clat percentiles (msec): 00:23:47.745 | 1.00th=[ 11], 5.00th=[ 26], 10.00th=[ 33], 20.00th=[ 43], 00:23:47.745 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 58], 60.00th=[ 63], 00:23:47.745 | 70.00th=[ 68], 80.00th=[ 78], 90.00th=[ 138], 95.00th=[ 148], 00:23:47.745 | 99.00th=[ 180], 99.50th=[ 184], 99.90th=[ 201], 99.95th=[ 220], 00:23:47.745 | 99.99th=[ 228] 00:23:47.745 bw ( KiB/s): min=98304, max=404480, per=9.54%, avg=238745.60, stdev=80935.41, samples=20 00:23:47.745 iops : min= 384, max= 1580, avg=932.60, stdev=316.15, samples=20 00:23:47.745 lat (msec) : 4=0.02%, 10=0.88%, 20=2.79%, 50=29.64%, 100=50.36% 00:23:47.745 lat (msec) : 250=16.31% 00:23:47.745 cpu : usr=0.39%, sys=3.12%, ctx=2242, majf=0, minf=4097 00:23:47.745 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:23:47.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:47.745 issued rwts: total=9389,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.745 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:47.745 job5: (groupid=0, jobs=1): err= 0: pid=1749176: Mon Jul 22 18:00:50 2024 00:23:47.745 read: IOPS=788, BW=197MiB/s (207MB/s)(1987MiB/10083msec) 00:23:47.745 slat (usec): min=7, max=135140, avg=1073.38, stdev=5158.21 00:23:47.745 clat (usec): min=1534, max=284602, avg=80050.95, stdev=48607.56 00:23:47.745 lat (usec): min=1584, max=308168, avg=81124.33, stdev=49392.10 00:23:47.745 clat percentiles (msec): 00:23:47.745 | 1.00th=[ 6], 5.00th=[ 12], 10.00th=[ 22], 
20.00th=[ 33], 00:23:47.745 | 30.00th=[ 38], 40.00th=[ 52], 50.00th=[ 92], 60.00th=[ 99], 00:23:47.745 | 70.00th=[ 104], 80.00th=[ 126], 90.00th=[ 144], 95.00th=[ 167], 00:23:47.745 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 232], 99.95th=[ 271], 00:23:47.745 | 99.99th=[ 284] 00:23:47.745 bw ( KiB/s): min=97280, max=472064, per=8.07%, avg=201830.40, stdev=87871.18, samples=20 00:23:47.745 iops : min= 380, max= 1844, avg=788.40, stdev=343.25, samples=20 00:23:47.745 lat (msec) : 2=0.03%, 4=0.54%, 10=3.71%, 20=4.87%, 50=30.35% 00:23:47.745 lat (msec) : 100=24.91%, 250=35.51%, 500=0.09% 00:23:47.745 cpu : usr=0.31%, sys=2.83%, ctx=2025, majf=0, minf=4097 00:23:47.745 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:47.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:47.745 issued rwts: total=7948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.745 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:47.745 job6: (groupid=0, jobs=1): err= 0: pid=1749177: Mon Jul 22 18:00:50 2024 00:23:47.745 read: IOPS=1017, BW=254MiB/s (267MB/s)(2563MiB/10079msec) 00:23:47.745 slat (usec): min=7, max=115409, avg=822.32, stdev=3285.25 00:23:47.745 clat (usec): min=1700, max=288479, avg=62007.90, stdev=36953.22 00:23:47.745 lat (usec): min=1750, max=288507, avg=62830.23, stdev=37475.39 00:23:47.745 clat percentiles (msec): 00:23:47.745 | 1.00th=[ 11], 5.00th=[ 21], 10.00th=[ 26], 20.00th=[ 28], 00:23:47.745 | 30.00th=[ 40], 40.00th=[ 46], 50.00th=[ 53], 60.00th=[ 61], 00:23:47.745 | 70.00th=[ 72], 80.00th=[ 99], 90.00th=[ 110], 95.00th=[ 133], 00:23:47.745 | 99.00th=[ 176], 99.50th=[ 178], 99.90th=[ 184], 99.95th=[ 190], 00:23:47.745 | 99.99th=[ 245] 00:23:47.745 bw ( KiB/s): min=101888, max=537088, per=10.43%, avg=260885.75, stdev=115694.73, samples=20 00:23:47.745 iops : min= 398, max= 2098, avg=1019.05, stdev=451.95, samples=20 00:23:47.745 lat (msec) : 2=0.02%, 4=0.04%, 10=0.93%, 20=3.78%, 50=42.12% 00:23:47.745 lat (msec) : 100=35.38%, 250=17.72%, 500=0.01% 00:23:47.745 cpu : usr=0.22%, sys=3.31%, ctx=2391, majf=0, minf=4097 00:23:47.745 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:23:47.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:47.745 issued rwts: total=10253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.745 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:47.745 job7: (groupid=0, jobs=1): err= 0: pid=1749178: Mon Jul 22 18:00:50 2024 00:23:47.745 read: IOPS=819, BW=205MiB/s (215MB/s)(2063MiB/10066msec) 00:23:47.745 slat (usec): min=6, max=142221, avg=1035.00, stdev=4842.66 00:23:47.746 clat (msec): min=2, max=289, avg=76.91, stdev=45.77 00:23:47.746 lat (msec): min=2, max=289, avg=77.94, stdev=46.58 00:23:47.746 clat percentiles (msec): 00:23:47.746 | 1.00th=[ 8], 5.00th=[ 18], 10.00th=[ 30], 20.00th=[ 43], 00:23:47.746 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 65], 60.00th=[ 73], 00:23:47.746 | 70.00th=[ 87], 80.00th=[ 125], 90.00th=[ 153], 95.00th=[ 167], 00:23:47.746 | 99.00th=[ 194], 99.50th=[ 197], 99.90th=[ 243], 99.95th=[ 249], 00:23:47.746 | 99.99th=[ 288] 00:23:47.746 bw ( KiB/s): min=95232, max=347648, per=8.38%, avg=209612.80, stdev=89420.26, samples=20 00:23:47.746 iops : min= 372, max= 1358, avg=818.80, stdev=349.30, samples=20 00:23:47.746 lat (msec) : 4=0.21%, 
10=1.68%, 20=3.62%, 50=28.52%, 100=40.38% 00:23:47.746 lat (msec) : 250=25.54%, 500=0.05% 00:23:47.746 cpu : usr=0.22%, sys=2.55%, ctx=1948, majf=0, minf=4097 00:23:47.746 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:47.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:47.746 issued rwts: total=8251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.746 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:47.746 job8: (groupid=0, jobs=1): err= 0: pid=1749179: Mon Jul 22 18:00:50 2024 00:23:47.746 read: IOPS=524, BW=131MiB/s (138MB/s)(1323MiB/10087msec) 00:23:47.746 slat (usec): min=8, max=82245, avg=1863.65, stdev=5171.47 00:23:47.746 clat (msec): min=5, max=230, avg=119.98, stdev=30.68 00:23:47.746 lat (msec): min=5, max=238, avg=121.85, stdev=31.32 00:23:47.746 clat percentiles (msec): 00:23:47.746 | 1.00th=[ 28], 5.00th=[ 79], 10.00th=[ 93], 20.00th=[ 99], 00:23:47.746 | 30.00th=[ 102], 40.00th=[ 106], 50.00th=[ 115], 60.00th=[ 128], 00:23:47.746 | 70.00th=[ 138], 80.00th=[ 146], 90.00th=[ 161], 95.00th=[ 174], 00:23:47.746 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 220], 99.95th=[ 228], 00:23:47.746 | 99.99th=[ 230] 00:23:47.746 bw ( KiB/s): min=93184, max=190976, per=5.35%, avg=133836.80, stdev=28057.36, samples=20 00:23:47.746 iops : min= 364, max= 746, avg=522.80, stdev=109.60, samples=20 00:23:47.746 lat (msec) : 10=0.21%, 20=0.47%, 50=1.30%, 100=24.29%, 250=73.73% 00:23:47.746 cpu : usr=0.30%, sys=1.93%, ctx=1208, majf=0, minf=4097 00:23:47.746 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:47.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:47.746 issued rwts: total=5291,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.746 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:47.746 job9: (groupid=0, jobs=1): err= 0: pid=1749180: Mon Jul 22 18:00:50 2024 00:23:47.746 read: IOPS=936, BW=234MiB/s (245MB/s)(2351MiB/10044msec) 00:23:47.746 slat (usec): min=7, max=71902, avg=879.91, stdev=2673.01 00:23:47.746 clat (msec): min=2, max=258, avg=67.39, stdev=25.83 00:23:47.746 lat (msec): min=2, max=258, avg=68.27, stdev=26.15 00:23:47.746 clat percentiles (msec): 00:23:47.746 | 1.00th=[ 14], 5.00th=[ 29], 10.00th=[ 36], 20.00th=[ 54], 00:23:47.746 | 30.00th=[ 59], 40.00th=[ 64], 50.00th=[ 68], 60.00th=[ 71], 00:23:47.746 | 70.00th=[ 74], 80.00th=[ 79], 90.00th=[ 89], 95.00th=[ 107], 00:23:47.746 | 99.00th=[ 176], 99.50th=[ 180], 99.90th=[ 194], 99.95th=[ 211], 00:23:47.746 | 99.99th=[ 259] 00:23:47.746 bw ( KiB/s): min=126976, max=358912, per=9.56%, avg=239156.40, stdev=52286.73, samples=20 00:23:47.746 iops : min= 496, max= 1402, avg=934.20, stdev=204.24, samples=20 00:23:47.746 lat (msec) : 4=0.01%, 10=0.32%, 20=2.01%, 50=14.78%, 100=76.58% 00:23:47.746 lat (msec) : 250=6.27%, 500=0.02% 00:23:47.746 cpu : usr=0.43%, sys=3.14%, ctx=2284, majf=0, minf=4097 00:23:47.746 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:23:47.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:47.746 issued rwts: total=9404,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.746 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:47.746 job10: 
(groupid=0, jobs=1): err= 0: pid=1749181: Mon Jul 22 18:00:50 2024 00:23:47.746 read: IOPS=669, BW=167MiB/s (176MB/s)(1689MiB/10087msec) 00:23:47.746 slat (usec): min=8, max=57697, avg=1332.32, stdev=4035.50 00:23:47.746 clat (msec): min=5, max=228, avg=94.09, stdev=44.84 00:23:47.746 lat (msec): min=5, max=228, avg=95.43, stdev=45.46 00:23:47.746 clat percentiles (msec): 00:23:47.746 | 1.00th=[ 29], 5.00th=[ 37], 10.00th=[ 40], 20.00th=[ 51], 00:23:47.746 | 30.00th=[ 62], 40.00th=[ 70], 50.00th=[ 80], 60.00th=[ 112], 00:23:47.746 | 70.00th=[ 132], 80.00th=[ 142], 90.00th=[ 155], 95.00th=[ 169], 00:23:47.746 | 99.00th=[ 182], 99.50th=[ 186], 99.90th=[ 197], 99.95th=[ 215], 00:23:47.746 | 99.99th=[ 230] 00:23:47.746 bw ( KiB/s): min=95744, max=413696, per=6.85%, avg=171315.20, stdev=88450.97, samples=20 00:23:47.746 iops : min= 374, max= 1616, avg=669.20, stdev=345.51, samples=20 00:23:47.746 lat (msec) : 10=0.21%, 20=0.33%, 50=19.35%, 100=37.42%, 250=42.69% 00:23:47.746 cpu : usr=0.27%, sys=2.52%, ctx=1650, majf=0, minf=4097 00:23:47.746 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:23:47.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:47.746 issued rwts: total=6755,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.746 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:47.746 00:23:47.746 Run status group 0 (all jobs): 00:23:47.746 READ: bw=2443MiB/s (2562MB/s), 129MiB/s-363MiB/s (135MB/s-380MB/s), io=24.1GiB (25.8GB), run=10019-10089msec 00:23:47.746 00:23:47.746 Disk stats (read/write): 00:23:47.746 nvme0n1: ios=25254/0, merge=0/0, ticks=1229971/0, in_queue=1229971, util=96.88% 00:23:47.746 nvme10n1: ios=17096/0, merge=0/0, ticks=1227938/0, in_queue=1227938, util=97.10% 00:23:47.746 nvme1n1: ios=28875/0, merge=0/0, ticks=1223632/0, in_queue=1223632, util=97.33% 00:23:47.746 nvme2n1: ios=10139/0, merge=0/0, ticks=1214442/0, in_queue=1214442, util=97.57% 00:23:47.746 nvme3n1: ios=18433/0, merge=0/0, ticks=1230745/0, in_queue=1230745, util=97.71% 00:23:47.746 nvme4n1: ios=15631/0, merge=0/0, ticks=1223862/0, in_queue=1223862, util=97.99% 00:23:47.746 nvme5n1: ios=20263/0, merge=0/0, ticks=1224806/0, in_queue=1224806, util=98.18% 00:23:47.746 nvme6n1: ios=16215/0, merge=0/0, ticks=1226152/0, in_queue=1226152, util=98.43% 00:23:47.746 nvme7n1: ios=10332/0, merge=0/0, ticks=1215194/0, in_queue=1215194, util=98.81% 00:23:47.746 nvme8n1: ios=18467/0, merge=0/0, ticks=1227737/0, in_queue=1227737, util=99.04% 00:23:47.746 nvme9n1: ios=13012/0, merge=0/0, ticks=1226040/0, in_queue=1226040, util=99.20% 00:23:47.746 18:00:50 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:23:47.746 [global] 00:23:47.746 thread=1 00:23:47.746 invalidate=1 00:23:47.746 rw=randwrite 00:23:47.746 time_based=1 00:23:47.746 runtime=10 00:23:47.746 ioengine=libaio 00:23:47.746 direct=1 00:23:47.746 bs=262144 00:23:47.746 iodepth=64 00:23:47.746 norandommap=1 00:23:47.746 numjobs=1 00:23:47.746 00:23:47.746 [job0] 00:23:47.746 filename=/dev/nvme0n1 00:23:47.746 [job1] 00:23:47.746 filename=/dev/nvme10n1 00:23:47.746 [job2] 00:23:47.746 filename=/dev/nvme1n1 00:23:47.746 [job3] 00:23:47.746 filename=/dev/nvme2n1 00:23:47.746 [job4] 00:23:47.746 filename=/dev/nvme3n1 00:23:47.746 [job5] 00:23:47.746 filename=/dev/nvme4n1 00:23:47.746 [job6] 00:23:47.746 
filename=/dev/nvme5n1 00:23:47.746 [job7] 00:23:47.746 filename=/dev/nvme6n1 00:23:47.746 [job8] 00:23:47.746 filename=/dev/nvme7n1 00:23:47.746 [job9] 00:23:47.746 filename=/dev/nvme8n1 00:23:47.746 [job10] 00:23:47.746 filename=/dev/nvme9n1 00:23:47.746 Could not set queue depth (nvme0n1) 00:23:47.746 Could not set queue depth (nvme10n1) 00:23:47.746 Could not set queue depth (nvme1n1) 00:23:47.746 Could not set queue depth (nvme2n1) 00:23:47.746 Could not set queue depth (nvme3n1) 00:23:47.746 Could not set queue depth (nvme4n1) 00:23:47.746 Could not set queue depth (nvme5n1) 00:23:47.746 Could not set queue depth (nvme6n1) 00:23:47.746 Could not set queue depth (nvme7n1) 00:23:47.746 Could not set queue depth (nvme8n1) 00:23:47.746 Could not set queue depth (nvme9n1) 00:23:47.746 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.746 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.746 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.746 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.746 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.746 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.746 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.746 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.746 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.746 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.746 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:47.746 fio-3.35 00:23:47.746 Starting 11 threads 00:23:57.755 00:23:57.755 job0: (groupid=0, jobs=1): err= 0: pid=1750512: Mon Jul 22 18:01:01 2024 00:23:57.755 write: IOPS=766, BW=192MiB/s (201MB/s)(1943MiB/10139msec); 0 zone resets 00:23:57.755 slat (usec): min=23, max=27103, avg=1159.00, stdev=2351.37 00:23:57.755 clat (msec): min=4, max=275, avg=82.28, stdev=31.22 00:23:57.755 lat (msec): min=4, max=275, avg=83.43, stdev=31.67 00:23:57.755 clat percentiles (msec): 00:23:57.755 | 1.00th=[ 16], 5.00th=[ 37], 10.00th=[ 50], 20.00th=[ 55], 00:23:57.755 | 30.00th=[ 71], 40.00th=[ 74], 50.00th=[ 78], 60.00th=[ 80], 00:23:57.755 | 70.00th=[ 99], 80.00th=[ 109], 90.00th=[ 128], 95.00th=[ 134], 00:23:57.755 | 99.00th=[ 140], 99.50th=[ 186], 99.90th=[ 259], 99.95th=[ 268], 00:23:57.755 | 99.99th=[ 275] 00:23:57.755 bw ( KiB/s): min=124416, max=322048, per=10.50%, avg=197367.55, stdev=55585.15, samples=20 00:23:57.755 iops : min= 486, max= 1258, avg=770.95, stdev=217.14, samples=20 00:23:57.755 lat (msec) : 10=0.13%, 20=2.03%, 50=9.32%, 100=59.56%, 250=28.78% 00:23:57.755 lat (msec) : 500=0.18% 00:23:57.755 cpu : usr=1.56%, sys=2.67%, ctx=2804, majf=0, minf=1 00:23:57.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:57.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:57.755 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:57.755 issued rwts: total=0,7772,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:57.755 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:57.755 job1: (groupid=0, jobs=1): err= 0: pid=1750535: Mon Jul 22 18:01:01 2024 00:23:57.755 write: IOPS=604, BW=151MiB/s (158MB/s)(1533MiB/10142msec); 0 zone resets 00:23:57.755 slat (usec): min=26, max=38439, avg=1559.98, stdev=2994.76 00:23:57.755 clat (msec): min=3, max=270, avg=104.22, stdev=26.55 00:23:57.755 lat (msec): min=4, max=270, avg=105.78, stdev=26.85 00:23:57.755 clat percentiles (msec): 00:23:57.755 | 1.00th=[ 31], 5.00th=[ 67], 10.00th=[ 70], 20.00th=[ 77], 00:23:57.755 | 30.00th=[ 96], 40.00th=[ 102], 50.00th=[ 105], 60.00th=[ 113], 00:23:57.755 | 70.00th=[ 120], 80.00th=[ 124], 90.00th=[ 133], 95.00th=[ 140], 00:23:57.755 | 99.00th=[ 167], 99.50th=[ 207], 99.90th=[ 255], 99.95th=[ 262], 00:23:57.755 | 99.99th=[ 271] 00:23:57.755 bw ( KiB/s): min=122880, max=242176, per=8.26%, avg=155366.40, stdev=32410.03, samples=20 00:23:57.755 iops : min= 480, max= 946, avg=606.90, stdev=126.60, samples=20 00:23:57.755 lat (msec) : 4=0.02%, 10=0.15%, 20=0.38%, 50=1.76%, 100=35.89% 00:23:57.755 lat (msec) : 250=61.64%, 500=0.16% 00:23:57.755 cpu : usr=1.47%, sys=2.12%, ctx=1849, majf=0, minf=1 00:23:57.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:23:57.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:57.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:57.755 issued rwts: total=0,6132,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:57.755 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:57.755 job2: (groupid=0, jobs=1): err= 0: pid=1750539: Mon Jul 22 18:01:01 2024 00:23:57.755 write: IOPS=642, BW=161MiB/s (168MB/s)(1616MiB/10057msec); 0 zone resets 00:23:57.755 slat (usec): min=18, max=71436, avg=1441.94, stdev=2802.80 00:23:57.755 clat (msec): min=17, max=180, avg=98.10, stdev=21.58 00:23:57.755 lat (msec): min=17, max=180, avg=99.55, stdev=21.86 00:23:57.755 clat percentiles (msec): 00:23:57.755 | 1.00th=[ 35], 5.00th=[ 71], 10.00th=[ 74], 20.00th=[ 78], 00:23:57.755 | 30.00th=[ 83], 40.00th=[ 88], 50.00th=[ 101], 60.00th=[ 109], 00:23:57.755 | 70.00th=[ 114], 80.00th=[ 120], 90.00th=[ 122], 95.00th=[ 125], 00:23:57.755 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 176], 99.95th=[ 178], 00:23:57.755 | 99.99th=[ 180] 00:23:57.755 bw ( KiB/s): min=114688, max=230400, per=8.72%, avg=163865.60, stdev=33237.68, samples=20 00:23:57.755 iops : min= 448, max= 900, avg=640.10, stdev=129.83, samples=20 00:23:57.755 lat (msec) : 20=0.06%, 50=1.66%, 100=47.14%, 250=51.14% 00:23:57.755 cpu : usr=1.62%, sys=2.10%, ctx=2007, majf=0, minf=1 00:23:57.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:23:57.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:57.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:57.755 issued rwts: total=0,6464,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:57.755 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:57.755 job3: (groupid=0, jobs=1): err= 0: pid=1750540: Mon Jul 22 18:01:01 2024 00:23:57.755 write: IOPS=617, BW=154MiB/s (162MB/s)(1553MiB/10057msec); 0 zone resets 00:23:57.755 slat (usec): min=24, max=67496, avg=1604.82, stdev=2939.15 00:23:57.755 clat (msec): min=54, max=163, avg=101.96, stdev=19.79 00:23:57.755 lat (msec): 
min=58, max=163, avg=103.57, stdev=19.92 00:23:57.755 clat percentiles (msec): 00:23:57.755 | 1.00th=[ 72], 5.00th=[ 74], 10.00th=[ 77], 20.00th=[ 81], 00:23:57.755 | 30.00th=[ 86], 40.00th=[ 96], 50.00th=[ 104], 60.00th=[ 112], 00:23:57.755 | 70.00th=[ 120], 80.00th=[ 122], 90.00th=[ 127], 95.00th=[ 131], 00:23:57.755 | 99.00th=[ 142], 99.50th=[ 146], 99.90th=[ 157], 99.95th=[ 157], 00:23:57.755 | 99.99th=[ 163] 00:23:57.755 bw ( KiB/s): min=115200, max=211456, per=8.37%, avg=157414.40, stdev=29318.42, samples=20 00:23:57.755 iops : min= 450, max= 826, avg=614.90, stdev=114.53, samples=20 00:23:57.755 lat (msec) : 100=44.74%, 250=55.26% 00:23:57.755 cpu : usr=1.55%, sys=1.97%, ctx=1596, majf=0, minf=1 00:23:57.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:23:57.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:57.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:57.755 issued rwts: total=0,6212,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:57.755 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:57.755 job4: (groupid=0, jobs=1): err= 0: pid=1750541: Mon Jul 22 18:01:01 2024 00:23:57.755 write: IOPS=566, BW=142MiB/s (149MB/s)(1436MiB/10137msec); 0 zone resets 00:23:57.755 slat (usec): min=25, max=21211, avg=1622.13, stdev=3002.50 00:23:57.755 clat (msec): min=14, max=236, avg=111.28, stdev=22.60 00:23:57.755 lat (msec): min=14, max=236, avg=112.90, stdev=22.78 00:23:57.755 clat percentiles (msec): 00:23:57.755 | 1.00th=[ 34], 5.00th=[ 72], 10.00th=[ 77], 20.00th=[ 99], 00:23:57.755 | 30.00th=[ 107], 40.00th=[ 114], 50.00th=[ 120], 60.00th=[ 121], 00:23:57.755 | 70.00th=[ 123], 80.00th=[ 126], 90.00th=[ 131], 95.00th=[ 136], 00:23:57.755 | 99.00th=[ 153], 99.50th=[ 192], 99.90th=[ 228], 99.95th=[ 234], 00:23:57.755 | 99.99th=[ 236] 00:23:57.755 bw ( KiB/s): min=122880, max=207360, per=7.74%, avg=145448.45, stdev=21646.02, samples=20 00:23:57.755 iops : min= 480, max= 810, avg=568.15, stdev=84.55, samples=20 00:23:57.755 lat (msec) : 20=0.10%, 50=1.76%, 100=20.51%, 250=77.63% 00:23:57.755 cpu : usr=1.33%, sys=1.87%, ctx=1877, majf=0, minf=1 00:23:57.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:23:57.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:57.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:57.755 issued rwts: total=0,5744,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:57.755 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:57.755 job5: (groupid=0, jobs=1): err= 0: pid=1750542: Mon Jul 22 18:01:01 2024 00:23:57.755 write: IOPS=761, BW=190MiB/s (200MB/s)(1929MiB/10136msec); 0 zone resets 00:23:57.755 slat (usec): min=18, max=83690, avg=1204.54, stdev=2784.26 00:23:57.755 clat (msec): min=2, max=278, avg=82.61, stdev=36.00 00:23:57.755 lat (msec): min=2, max=278, avg=83.81, stdev=36.46 00:23:57.755 clat percentiles (msec): 00:23:57.755 | 1.00th=[ 10], 5.00th=[ 45], 10.00th=[ 51], 20.00th=[ 53], 00:23:57.755 | 30.00th=[ 55], 40.00th=[ 58], 50.00th=[ 71], 60.00th=[ 95], 00:23:57.755 | 70.00th=[ 106], 80.00th=[ 120], 90.00th=[ 131], 95.00th=[ 138], 00:23:57.755 | 99.00th=[ 180], 99.50th=[ 207], 99.90th=[ 264], 99.95th=[ 271], 00:23:57.755 | 99.99th=[ 279] 00:23:57.755 bw ( KiB/s): min=120320, max=306176, per=10.42%, avg=195865.60, stdev=68984.33, samples=20 00:23:57.755 iops : min= 470, max= 1196, avg=765.10, stdev=269.47, samples=20 00:23:57.755 lat (msec) : 4=0.04%, 
10=1.02%, 20=1.02%, 50=6.52%, 100=54.93% 00:23:57.755 lat (msec) : 250=36.28%, 500=0.18% 00:23:57.755 cpu : usr=1.66%, sys=2.65%, ctx=2430, majf=0, minf=1 00:23:57.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:57.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:57.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:57.755 issued rwts: total=0,7714,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:57.755 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:57.755 job6: (groupid=0, jobs=1): err= 0: pid=1750543: Mon Jul 22 18:01:01 2024 00:23:57.755 write: IOPS=627, BW=157MiB/s (165MB/s)(1584MiB/10094msec); 0 zone resets 00:23:57.755 slat (usec): min=23, max=14621, avg=1554.94, stdev=2688.16 00:23:57.755 clat (msec): min=16, max=183, avg=100.35, stdev=12.71 00:23:57.755 lat (msec): min=16, max=183, avg=101.91, stdev=12.65 00:23:57.755 clat percentiles (msec): 00:23:57.755 | 1.00th=[ 67], 5.00th=[ 89], 10.00th=[ 91], 20.00th=[ 94], 00:23:57.755 | 30.00th=[ 95], 40.00th=[ 96], 50.00th=[ 97], 60.00th=[ 100], 00:23:57.755 | 70.00th=[ 103], 80.00th=[ 106], 90.00th=[ 120], 95.00th=[ 129], 00:23:57.755 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 171], 99.95th=[ 178], 00:23:57.755 | 99.99th=[ 184] 00:23:57.755 bw ( KiB/s): min=126976, max=174592, per=8.54%, avg=160614.40, stdev=12921.87, samples=20 00:23:57.755 iops : min= 496, max= 682, avg=627.40, stdev=50.48, samples=20 00:23:57.755 lat (msec) : 20=0.11%, 50=0.36%, 100=61.48%, 250=38.05% 00:23:57.755 cpu : usr=1.62%, sys=2.01%, ctx=1696, majf=0, minf=1 00:23:57.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:23:57.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:57.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:57.755 issued rwts: total=0,6337,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:57.755 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:57.756 job7: (groupid=0, jobs=1): err= 0: pid=1750544: Mon Jul 22 18:01:01 2024 00:23:57.756 write: IOPS=870, BW=218MiB/s (228MB/s)(2186MiB/10048msec); 0 zone resets 00:23:57.756 slat (usec): min=24, max=49039, avg=1068.81, stdev=2169.23 00:23:57.756 clat (usec): min=1756, max=159680, avg=72427.67, stdev=25543.27 00:23:57.756 lat (usec): min=1820, max=159724, avg=73496.49, stdev=25905.10 00:23:57.756 clat percentiles (msec): 00:23:57.756 | 1.00th=[ 12], 5.00th=[ 42], 10.00th=[ 45], 20.00th=[ 48], 00:23:57.756 | 30.00th=[ 59], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 77], 00:23:57.756 | 70.00th=[ 79], 80.00th=[ 85], 90.00th=[ 105], 95.00th=[ 127], 00:23:57.756 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 157], 99.95th=[ 157], 00:23:57.756 | 99.99th=[ 161] 00:23:57.756 bw ( KiB/s): min=118509, max=342016, per=11.82%, avg=222271.05, stdev=57292.24, samples=20 00:23:57.756 iops : min= 462, max= 1336, avg=868.20, stdev=223.89, samples=20 00:23:57.756 lat (msec) : 2=0.02%, 4=0.15%, 10=0.59%, 20=1.22%, 50=22.33% 00:23:57.756 lat (msec) : 100=60.69%, 250=14.99% 00:23:57.756 cpu : usr=2.19%, sys=3.06%, ctx=2735, majf=0, minf=1 00:23:57.756 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:23:57.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:57.756 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:57.756 issued rwts: total=0,8745,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:57.756 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:23:57.756 job8: (groupid=0, jobs=1): err= 0: pid=1750545: Mon Jul 22 18:01:01 2024 00:23:57.756 write: IOPS=640, BW=160MiB/s (168MB/s)(1616MiB/10095msec); 0 zone resets 00:23:57.756 slat (usec): min=24, max=96379, avg=1504.56, stdev=3114.21 00:23:57.756 clat (msec): min=10, max=199, avg=98.40, stdev=17.59 00:23:57.756 lat (msec): min=12, max=199, avg=99.90, stdev=17.67 00:23:57.756 clat percentiles (msec): 00:23:57.756 | 1.00th=[ 40], 5.00th=[ 79], 10.00th=[ 85], 20.00th=[ 89], 00:23:57.756 | 30.00th=[ 93], 40.00th=[ 95], 50.00th=[ 96], 60.00th=[ 99], 00:23:57.756 | 70.00th=[ 103], 80.00th=[ 110], 90.00th=[ 121], 95.00th=[ 124], 00:23:57.756 | 99.00th=[ 159], 99.50th=[ 178], 99.90th=[ 194], 99.95th=[ 194], 00:23:57.756 | 99.99th=[ 201] 00:23:57.756 bw ( KiB/s): min=115200, max=190464, per=8.71%, avg=163814.40, stdev=19303.99, samples=20 00:23:57.756 iops : min= 450, max= 744, avg=639.90, stdev=75.41, samples=20 00:23:57.756 lat (msec) : 20=0.25%, 50=1.30%, 100=63.37%, 250=35.08% 00:23:57.756 cpu : usr=1.65%, sys=2.22%, ctx=1812, majf=0, minf=1 00:23:57.756 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:23:57.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:57.756 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:57.756 issued rwts: total=0,6462,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:57.756 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:57.756 job9: (groupid=0, jobs=1): err= 0: pid=1750546: Mon Jul 22 18:01:01 2024 00:23:57.756 write: IOPS=646, BW=162MiB/s (169MB/s)(1638MiB/10139msec); 0 zone resets 00:23:57.756 slat (usec): min=18, max=129807, avg=1389.58, stdev=3608.27 00:23:57.756 clat (usec): min=1539, max=263899, avg=97583.74, stdev=36070.75 00:23:57.756 lat (msec): min=2, max=263, avg=98.97, stdev=36.46 00:23:57.756 clat percentiles (msec): 00:23:57.756 | 1.00th=[ 14], 5.00th=[ 45], 10.00th=[ 52], 20.00th=[ 58], 00:23:57.756 | 30.00th=[ 73], 40.00th=[ 95], 50.00th=[ 105], 60.00th=[ 117], 00:23:57.756 | 70.00th=[ 121], 80.00th=[ 127], 90.00th=[ 136], 95.00th=[ 142], 00:23:57.756 | 99.00th=[ 186], 99.50th=[ 213], 99.90th=[ 249], 99.95th=[ 257], 00:23:57.756 | 99.99th=[ 264] 00:23:57.756 bw ( KiB/s): min=117760, max=283136, per=8.84%, avg=166118.40, stdev=47988.61, samples=20 00:23:57.756 iops : min= 460, max= 1106, avg=648.90, stdev=187.46, samples=20 00:23:57.756 lat (msec) : 2=0.02%, 4=0.14%, 10=0.49%, 20=1.13%, 50=5.71% 00:23:57.756 lat (msec) : 100=36.31%, 250=56.12%, 500=0.09% 00:23:57.756 cpu : usr=1.35%, sys=2.08%, ctx=2227, majf=0, minf=1 00:23:57.756 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:23:57.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:57.756 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:57.756 issued rwts: total=0,6552,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:57.756 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:57.756 job10: (groupid=0, jobs=1): err= 0: pid=1750550: Mon Jul 22 18:01:01 2024 00:23:57.756 write: IOPS=629, BW=157MiB/s (165MB/s)(1587MiB/10091msec); 0 zone resets 00:23:57.756 slat (usec): min=23, max=20421, avg=1570.57, stdev=2699.42 00:23:57.756 clat (msec): min=15, max=185, avg=100.14, stdev=13.45 00:23:57.756 lat (msec): min=15, max=185, avg=101.71, stdev=13.39 00:23:57.756 clat percentiles (msec): 00:23:57.756 | 1.00th=[ 70], 5.00th=[ 88], 10.00th=[ 90], 20.00th=[ 94], 00:23:57.756 | 30.00th=[ 
95], 40.00th=[ 96], 50.00th=[ 97], 60.00th=[ 100], 00:23:57.756 | 70.00th=[ 103], 80.00th=[ 106], 90.00th=[ 121], 95.00th=[ 130], 00:23:57.756 | 99.00th=[ 138], 99.50th=[ 140], 99.90th=[ 174], 99.95th=[ 180], 00:23:57.756 | 99.99th=[ 186] 00:23:57.756 bw ( KiB/s): min=122880, max=174592, per=8.56%, avg=160913.40, stdev=13614.92, samples=20 00:23:57.756 iops : min= 480, max= 682, avg=628.55, stdev=53.16, samples=20 00:23:57.756 lat (msec) : 20=0.06%, 50=0.44%, 100=61.69%, 250=37.81% 00:23:57.756 cpu : usr=1.64%, sys=1.93%, ctx=1633, majf=0, minf=1 00:23:57.756 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:23:57.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:57.756 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:57.756 issued rwts: total=0,6348,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:57.756 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:57.756 00:23:57.756 Run status group 0 (all jobs): 00:23:57.756 WRITE: bw=1836MiB/s (1925MB/s), 142MiB/s-218MiB/s (149MB/s-228MB/s), io=18.2GiB (19.5GB), run=10048-10142msec 00:23:57.756 00:23:57.756 Disk stats (read/write): 00:23:57.756 nvme0n1: ios=49/15531, merge=0/0, ticks=1431/1234873, in_queue=1236304, util=100.00% 00:23:57.756 nvme10n1: ios=41/12243, merge=0/0, ticks=1597/1227781, in_queue=1229378, util=100.00% 00:23:57.756 nvme1n1: ios=48/12627, merge=0/0, ticks=135/1204372, in_queue=1204507, util=98.05% 00:23:57.756 nvme2n1: ios=43/12124, merge=0/0, ticks=1273/1200628, in_queue=1201901, util=100.00% 00:23:57.756 nvme3n1: ios=13/11478, merge=0/0, ticks=362/1233424, in_queue=1233786, util=97.76% 00:23:57.756 nvme4n1: ios=52/15419, merge=0/0, ticks=2996/1220435, in_queue=1223431, util=100.00% 00:23:57.756 nvme5n1: ios=0/12389, merge=0/0, ticks=0/1201654, in_queue=1201654, util=98.09% 00:23:57.756 nvme6n1: ios=49/16956, merge=0/0, ticks=1774/1202415, in_queue=1204189, util=100.00% 00:23:57.756 nvme7n1: ios=46/12641, merge=0/0, ticks=2707/1192647, in_queue=1195354, util=100.00% 00:23:57.756 nvme8n1: ios=44/13091, merge=0/0, ticks=2772/1203423, in_queue=1206195, util=100.00% 00:23:57.756 nvme9n1: ios=0/12417, merge=0/0, ticks=0/1201008, in_queue=1201008, util=99.08% 00:23:57.756 18:01:01 -- target/multiconnection.sh@36 -- # sync 00:23:57.756 18:01:01 -- target/multiconnection.sh@37 -- # seq 1 11 00:23:57.756 18:01:01 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:57.756 18:01:01 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:57.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:57.756 18:01:01 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:23:57.756 18:01:01 -- common/autotest_common.sh@1198 -- # local i=0 00:23:57.756 18:01:01 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:57.756 18:01:01 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:23:57.756 18:01:01 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:57.756 18:01:01 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:23:57.756 18:01:01 -- common/autotest_common.sh@1210 -- # return 0 00:23:57.756 18:01:01 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:57.756 18:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:57.756 18:01:01 -- common/autotest_common.sh@10 -- # set +x 00:23:57.756 18:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:57.756 
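The trace entries immediately before and after this point repeat the same teardown sequence once per subsystem (the trace shows `seq 1 11`, so eleven iterations of multiconnection.sh lines 37-40). A minimal sketch of that loop, reconstructed only from the trace above — `waitforserial_disconnect` and `rpc_cmd` are helpers from the SPDK test framework (autotest_common.sh / nvmf common.sh), not standalone commands — looks like:

    sync
    for i in $(seq 1 $NVMF_SUBSYS); do                                  # NVMF_SUBSYS=11 in this run
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"              # drop the initiator-side controller
        waitforserial_disconnect "SPDK${i}"                             # poll lsblk until the serial disappears
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"   # remove the subsystem on the target
    done

The per-cnode log blocks that follow (cnode2 through cnode11) are the successive iterations of this loop.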
18:01:01 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:57.756 18:01:01 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:23:57.756 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:23:57.756 18:01:02 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:23:57.756 18:01:02 -- common/autotest_common.sh@1198 -- # local i=0 00:23:57.756 18:01:02 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:57.756 18:01:02 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:23:58.018 18:01:02 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:58.018 18:01:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:23:58.018 18:01:02 -- common/autotest_common.sh@1210 -- # return 0 00:23:58.018 18:01:02 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:58.018 18:01:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:58.018 18:01:02 -- common/autotest_common.sh@10 -- # set +x 00:23:58.018 18:01:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:58.018 18:01:02 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:58.018 18:01:02 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:23:58.278 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:23:58.278 18:01:02 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:23:58.278 18:01:02 -- common/autotest_common.sh@1198 -- # local i=0 00:23:58.278 18:01:02 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:58.278 18:01:02 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:23:58.278 18:01:02 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:58.278 18:01:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:23:58.278 18:01:02 -- common/autotest_common.sh@1210 -- # return 0 00:23:58.278 18:01:02 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:23:58.278 18:01:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:58.278 18:01:02 -- common/autotest_common.sh@10 -- # set +x 00:23:58.278 18:01:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:58.278 18:01:02 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:58.278 18:01:02 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:23:58.537 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:23:58.537 18:01:02 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:23:58.537 18:01:02 -- common/autotest_common.sh@1198 -- # local i=0 00:23:58.537 18:01:02 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:58.537 18:01:02 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:23:58.537 18:01:02 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:58.537 18:01:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:23:58.537 18:01:02 -- common/autotest_common.sh@1210 -- # return 0 00:23:58.537 18:01:02 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:23:58.537 18:01:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:58.537 18:01:02 -- common/autotest_common.sh@10 -- # set +x 00:23:58.537 18:01:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:58.537 18:01:02 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:58.537 18:01:02 
-- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:23:58.797 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:23:58.797 18:01:02 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:23:58.797 18:01:02 -- common/autotest_common.sh@1198 -- # local i=0 00:23:58.797 18:01:02 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:58.797 18:01:02 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:23:58.797 18:01:02 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:58.797 18:01:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:23:58.797 18:01:02 -- common/autotest_common.sh@1210 -- # return 0 00:23:58.797 18:01:02 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:23:58.797 18:01:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:58.797 18:01:02 -- common/autotest_common.sh@10 -- # set +x 00:23:58.797 18:01:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:58.797 18:01:02 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:58.797 18:01:02 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:23:59.056 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:23:59.056 18:01:03 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:23:59.056 18:01:03 -- common/autotest_common.sh@1198 -- # local i=0 00:23:59.056 18:01:03 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:59.056 18:01:03 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:23:59.056 18:01:03 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:59.056 18:01:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:23:59.056 18:01:03 -- common/autotest_common.sh@1210 -- # return 0 00:23:59.056 18:01:03 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:23:59.056 18:01:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:59.056 18:01:03 -- common/autotest_common.sh@10 -- # set +x 00:23:59.056 18:01:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:59.056 18:01:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:59.056 18:01:03 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:23:59.315 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:23:59.315 18:01:03 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:23:59.315 18:01:03 -- common/autotest_common.sh@1198 -- # local i=0 00:23:59.315 18:01:03 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:59.315 18:01:03 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:23:59.315 18:01:03 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:59.315 18:01:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:23:59.315 18:01:03 -- common/autotest_common.sh@1210 -- # return 0 00:23:59.315 18:01:03 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:23:59.315 18:01:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:59.315 18:01:03 -- common/autotest_common.sh@10 -- # set +x 00:23:59.315 18:01:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:59.315 18:01:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:59.315 18:01:03 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:23:59.575 
NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:23:59.575 18:01:03 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:23:59.575 18:01:03 -- common/autotest_common.sh@1198 -- # local i=0 00:23:59.575 18:01:03 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:59.575 18:01:03 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:23:59.575 18:01:03 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:59.575 18:01:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:23:59.575 18:01:03 -- common/autotest_common.sh@1210 -- # return 0 00:23:59.575 18:01:03 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:23:59.575 18:01:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:59.575 18:01:03 -- common/autotest_common.sh@10 -- # set +x 00:23:59.575 18:01:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:59.575 18:01:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:59.575 18:01:03 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:23:59.575 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:23:59.575 18:01:03 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:23:59.575 18:01:03 -- common/autotest_common.sh@1198 -- # local i=0 00:23:59.575 18:01:03 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:59.575 18:01:03 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:23:59.575 18:01:03 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:59.575 18:01:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:23:59.575 18:01:03 -- common/autotest_common.sh@1210 -- # return 0 00:23:59.575 18:01:03 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:23:59.575 18:01:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:59.575 18:01:03 -- common/autotest_common.sh@10 -- # set +x 00:23:59.575 18:01:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:59.575 18:01:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:59.575 18:01:03 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:23:59.835 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:23:59.835 18:01:03 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:23:59.835 18:01:03 -- common/autotest_common.sh@1198 -- # local i=0 00:23:59.835 18:01:03 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:59.835 18:01:03 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:23:59.835 18:01:03 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:59.835 18:01:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:23:59.835 18:01:03 -- common/autotest_common.sh@1210 -- # return 0 00:23:59.835 18:01:03 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:23:59.835 18:01:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:59.835 18:01:03 -- common/autotest_common.sh@10 -- # set +x 00:23:59.835 18:01:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:59.835 18:01:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:59.835 18:01:03 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:23:59.835 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:23:59.835 18:01:04 -- 
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:23:59.835 18:01:04 -- common/autotest_common.sh@1198 -- # local i=0 00:23:59.835 18:01:04 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:23:59.835 18:01:04 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:59.835 18:01:04 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:59.835 18:01:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:23:59.835 18:01:04 -- common/autotest_common.sh@1210 -- # return 0 00:23:59.835 18:01:04 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:23:59.835 18:01:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:59.835 18:01:04 -- common/autotest_common.sh@10 -- # set +x 00:23:59.835 18:01:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:59.835 18:01:04 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:23:59.835 18:01:04 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:23:59.835 18:01:04 -- target/multiconnection.sh@47 -- # nvmftestfini 00:23:59.835 18:01:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:59.835 18:01:04 -- nvmf/common.sh@116 -- # sync 00:23:59.835 18:01:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:59.835 18:01:04 -- nvmf/common.sh@119 -- # set +e 00:23:59.835 18:01:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:59.835 18:01:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:59.835 rmmod nvme_tcp 00:23:59.835 rmmod nvme_fabrics 00:24:00.094 rmmod nvme_keyring 00:24:00.094 18:01:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:00.094 18:01:04 -- nvmf/common.sh@123 -- # set -e 00:24:00.094 18:01:04 -- nvmf/common.sh@124 -- # return 0 00:24:00.094 18:01:04 -- nvmf/common.sh@477 -- # '[' -n 1741115 ']' 00:24:00.094 18:01:04 -- nvmf/common.sh@478 -- # killprocess 1741115 00:24:00.094 18:01:04 -- common/autotest_common.sh@926 -- # '[' -z 1741115 ']' 00:24:00.094 18:01:04 -- common/autotest_common.sh@930 -- # kill -0 1741115 00:24:00.094 18:01:04 -- common/autotest_common.sh@931 -- # uname 00:24:00.094 18:01:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:00.094 18:01:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1741115 00:24:00.094 18:01:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:00.094 18:01:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:00.094 18:01:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1741115' 00:24:00.094 killing process with pid 1741115 00:24:00.094 18:01:04 -- common/autotest_common.sh@945 -- # kill 1741115 00:24:00.094 18:01:04 -- common/autotest_common.sh@950 -- # wait 1741115 00:24:00.353 18:01:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:00.353 18:01:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:00.353 18:01:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:00.353 18:01:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:00.353 18:01:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:00.353 18:01:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.353 18:01:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:00.353 18:01:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.266 18:01:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:02.266 00:24:02.266 real 1m17.488s 00:24:02.266 user 4m40.442s 00:24:02.266 sys 
0m22.937s 00:24:02.266 18:01:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:02.266 18:01:06 -- common/autotest_common.sh@10 -- # set +x 00:24:02.266 ************************************ 00:24:02.266 END TEST nvmf_multiconnection 00:24:02.266 ************************************ 00:24:02.527 18:01:06 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:02.527 18:01:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:02.527 18:01:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:02.527 18:01:06 -- common/autotest_common.sh@10 -- # set +x 00:24:02.527 ************************************ 00:24:02.527 START TEST nvmf_initiator_timeout 00:24:02.527 ************************************ 00:24:02.527 18:01:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:02.527 * Looking for test storage... 00:24:02.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:02.527 18:01:06 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:02.527 18:01:06 -- nvmf/common.sh@7 -- # uname -s 00:24:02.527 18:01:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.527 18:01:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.527 18:01:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.527 18:01:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.527 18:01:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.527 18:01:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.527 18:01:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.527 18:01:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.527 18:01:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.527 18:01:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.527 18:01:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:02.527 18:01:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:24:02.527 18:01:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.527 18:01:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.527 18:01:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:02.527 18:01:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:02.527 18:01:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.527 18:01:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.527 18:01:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.527 18:01:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.527 18:01:06 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.527 18:01:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.527 18:01:06 -- paths/export.sh@5 -- # export PATH 00:24:02.527 18:01:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.527 18:01:06 -- nvmf/common.sh@46 -- # : 0 00:24:02.527 18:01:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:02.527 18:01:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:02.527 18:01:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:02.527 18:01:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.527 18:01:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.527 18:01:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:02.527 18:01:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:02.527 18:01:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:02.527 18:01:06 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:02.527 18:01:06 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:02.527 18:01:06 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:24:02.527 18:01:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:02.527 18:01:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.527 18:01:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:02.527 18:01:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:02.527 18:01:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:02.527 18:01:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.527 18:01:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.527 18:01:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.527 18:01:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:02.527 18:01:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:02.527 18:01:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:02.527 18:01:06 -- common/autotest_common.sh@10 -- # set +x 00:24:10.729 18:01:14 -- nvmf/common.sh@288 -- # local 
intel=0x8086 mellanox=0x15b3 pci 00:24:10.729 18:01:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:10.729 18:01:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:10.729 18:01:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:10.729 18:01:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:10.729 18:01:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:10.729 18:01:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:10.729 18:01:14 -- nvmf/common.sh@294 -- # net_devs=() 00:24:10.729 18:01:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:10.729 18:01:14 -- nvmf/common.sh@295 -- # e810=() 00:24:10.729 18:01:14 -- nvmf/common.sh@295 -- # local -ga e810 00:24:10.729 18:01:14 -- nvmf/common.sh@296 -- # x722=() 00:24:10.729 18:01:14 -- nvmf/common.sh@296 -- # local -ga x722 00:24:10.729 18:01:14 -- nvmf/common.sh@297 -- # mlx=() 00:24:10.729 18:01:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:10.729 18:01:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:10.729 18:01:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:10.729 18:01:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:10.729 18:01:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:10.729 18:01:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:10.729 18:01:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:10.729 18:01:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:10.729 18:01:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:10.729 18:01:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:10.729 18:01:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:10.729 18:01:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:10.729 18:01:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:10.729 18:01:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:10.729 18:01:14 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:10.729 18:01:14 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:10.729 18:01:14 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:10.729 18:01:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:10.729 18:01:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:10.729 18:01:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:10.729 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:10.729 18:01:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:10.729 18:01:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:10.729 18:01:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.729 18:01:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.729 18:01:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:10.729 18:01:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:10.729 18:01:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:10.729 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:10.729 18:01:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:10.729 18:01:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:10.729 18:01:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:10.729 18:01:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:10.729 18:01:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:10.729 18:01:14 -- 
nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:10.729 18:01:14 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:10.729 18:01:14 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:10.729 18:01:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:10.729 18:01:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.729 18:01:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:10.729 18:01:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.729 18:01:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:10.729 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:10.729 18:01:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.729 18:01:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:10.729 18:01:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:10.729 18:01:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:10.729 18:01:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:10.729 18:01:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:10.729 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:10.729 18:01:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:10.729 18:01:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:10.729 18:01:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:10.729 18:01:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:10.729 18:01:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:10.729 18:01:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:10.729 18:01:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:10.729 18:01:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:10.729 18:01:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:10.729 18:01:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:10.729 18:01:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:10.729 18:01:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:10.729 18:01:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:10.730 18:01:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:10.730 18:01:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:10.730 18:01:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:10.730 18:01:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:10.730 18:01:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:10.730 18:01:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:10.730 18:01:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:10.730 18:01:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:10.730 18:01:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:10.730 18:01:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:10.730 18:01:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:10.730 18:01:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:10.730 18:01:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:10.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:10.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.707 ms 00:24:10.730 00:24:10.730 --- 10.0.0.2 ping statistics --- 00:24:10.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.730 rtt min/avg/max/mdev = 0.707/0.707/0.707/0.000 ms 00:24:10.730 18:01:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:10.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:10.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:24:10.730 00:24:10.730 --- 10.0.0.1 ping statistics --- 00:24:10.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.730 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:24:10.730 18:01:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.730 18:01:14 -- nvmf/common.sh@410 -- # return 0 00:24:10.730 18:01:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:10.730 18:01:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.730 18:01:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:10.730 18:01:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:10.730 18:01:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.730 18:01:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:10.730 18:01:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:10.730 18:01:14 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:24:10.730 18:01:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:10.730 18:01:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:10.730 18:01:14 -- common/autotest_common.sh@10 -- # set +x 00:24:10.730 18:01:14 -- nvmf/common.sh@469 -- # nvmfpid=1757128 00:24:10.730 18:01:14 -- nvmf/common.sh@470 -- # waitforlisten 1757128 00:24:10.730 18:01:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:10.730 18:01:14 -- common/autotest_common.sh@819 -- # '[' -z 1757128 ']' 00:24:10.730 18:01:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.730 18:01:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:10.730 18:01:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.730 18:01:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:10.730 18:01:14 -- common/autotest_common.sh@10 -- # set +x 00:24:10.730 [2024-07-22 18:01:15.002711] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:10.730 [2024-07-22 18:01:15.002774] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.991 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.991 [2024-07-22 18:01:15.097339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:10.991 [2024-07-22 18:01:15.186903] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:10.991 [2024-07-22 18:01:15.187067] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:10.991 [2024-07-22 18:01:15.187077] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.991 [2024-07-22 18:01:15.187084] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:10.991 [2024-07-22 18:01:15.187264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.991 [2024-07-22 18:01:15.187396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:10.991 [2024-07-22 18:01:15.187479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:10.991 [2024-07-22 18:01:15.187481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.561 18:01:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:11.561 18:01:15 -- common/autotest_common.sh@852 -- # return 0 00:24:11.561 18:01:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:11.561 18:01:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:11.561 18:01:15 -- common/autotest_common.sh@10 -- # set +x 00:24:11.822 18:01:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:11.822 18:01:15 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:11.822 18:01:15 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:11.822 18:01:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:11.822 18:01:15 -- common/autotest_common.sh@10 -- # set +x 00:24:11.822 Malloc0 00:24:11.822 18:01:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:11.822 18:01:15 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:24:11.822 18:01:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:11.822 18:01:15 -- common/autotest_common.sh@10 -- # set +x 00:24:11.822 Delay0 00:24:11.822 18:01:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:11.822 18:01:15 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:11.822 18:01:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:11.822 18:01:15 -- common/autotest_common.sh@10 -- # set +x 00:24:11.822 [2024-07-22 18:01:15.899146] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:11.822 18:01:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:11.822 18:01:15 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:11.822 18:01:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:11.822 18:01:15 -- common/autotest_common.sh@10 -- # set +x 00:24:11.822 18:01:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:11.822 18:01:15 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:24:11.822 18:01:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:11.822 18:01:15 -- common/autotest_common.sh@10 -- # set +x 00:24:11.822 18:01:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:11.822 18:01:15 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:11.822 18:01:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:11.822 18:01:15 -- common/autotest_common.sh@10 -- # set +x 00:24:11.822 [2024-07-22 18:01:15.936282] tcp.c: 
951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.822 18:01:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:11.822 18:01:15 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:13.204 18:01:17 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:24:13.204 18:01:17 -- common/autotest_common.sh@1177 -- # local i=0 00:24:13.204 18:01:17 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:24:13.204 18:01:17 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:24:13.204 18:01:17 -- common/autotest_common.sh@1184 -- # sleep 2 00:24:15.748 18:01:19 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:24:15.748 18:01:19 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:24:15.748 18:01:19 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:24:15.748 18:01:19 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:24:15.748 18:01:19 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:24:15.748 18:01:19 -- common/autotest_common.sh@1187 -- # return 0 00:24:15.748 18:01:19 -- target/initiator_timeout.sh@35 -- # fio_pid=1757791 00:24:15.748 18:01:19 -- target/initiator_timeout.sh@37 -- # sleep 3 00:24:15.748 18:01:19 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:24:15.748 [global] 00:24:15.748 thread=1 00:24:15.748 invalidate=1 00:24:15.748 rw=write 00:24:15.748 time_based=1 00:24:15.748 runtime=60 00:24:15.748 ioengine=libaio 00:24:15.748 direct=1 00:24:15.748 bs=4096 00:24:15.748 iodepth=1 00:24:15.748 norandommap=0 00:24:15.748 numjobs=1 00:24:15.748 00:24:15.748 verify_dump=1 00:24:15.748 verify_backlog=512 00:24:15.748 verify_state_save=0 00:24:15.748 do_verify=1 00:24:15.748 verify=crc32c-intel 00:24:15.748 [job0] 00:24:15.748 filename=/dev/nvme0n1 00:24:15.748 Could not set queue depth (nvme0n1) 00:24:15.748 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:15.748 fio-3.35 00:24:15.748 Starting 1 thread 00:24:18.295 18:01:22 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:24:18.295 18:01:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.295 18:01:22 -- common/autotest_common.sh@10 -- # set +x 00:24:18.295 true 00:24:18.295 18:01:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.295 18:01:22 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:24:18.295 18:01:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.295 18:01:22 -- common/autotest_common.sh@10 -- # set +x 00:24:18.295 true 00:24:18.295 18:01:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.295 18:01:22 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:24:18.295 18:01:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.295 18:01:22 -- common/autotest_common.sh@10 -- # set +x 00:24:18.295 true 00:24:18.295 18:01:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.295 18:01:22 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 
00:24:18.295 18:01:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.295 18:01:22 -- common/autotest_common.sh@10 -- # set +x 00:24:18.295 true 00:24:18.295 18:01:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.295 18:01:22 -- target/initiator_timeout.sh@45 -- # sleep 3 00:24:21.590 18:01:25 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:24:21.590 18:01:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.590 18:01:25 -- common/autotest_common.sh@10 -- # set +x 00:24:21.590 true 00:24:21.590 18:01:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.590 18:01:25 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:24:21.590 18:01:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.590 18:01:25 -- common/autotest_common.sh@10 -- # set +x 00:24:21.590 true 00:24:21.590 18:01:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.590 18:01:25 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:24:21.590 18:01:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.590 18:01:25 -- common/autotest_common.sh@10 -- # set +x 00:24:21.590 true 00:24:21.590 18:01:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.590 18:01:25 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:24:21.590 18:01:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.590 18:01:25 -- common/autotest_common.sh@10 -- # set +x 00:24:21.590 true 00:24:21.590 18:01:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.590 18:01:25 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:24:21.590 18:01:25 -- target/initiator_timeout.sh@54 -- # wait 1757791 00:25:17.853 00:25:17.853 job0: (groupid=0, jobs=1): err= 0: pid=1757967: Mon Jul 22 18:02:19 2024 00:25:17.853 read: IOPS=31, BW=127KiB/s (130kB/s)(7648KiB/60030msec) 00:25:17.853 slat (nsec): min=6156, max=59326, avg=24357.82, stdev=5103.36 00:25:17.853 clat (usec): min=352, max=44010, avg=8962.30, stdev=16351.32 00:25:17.853 lat (usec): min=358, max=44040, avg=8986.66, stdev=16351.84 00:25:17.853 clat percentiles (usec): 00:25:17.853 | 1.00th=[ 553], 5.00th=[ 635], 10.00th=[ 685], 20.00th=[ 750], 00:25:17.853 | 30.00th=[ 791], 40.00th=[ 857], 50.00th=[ 906], 60.00th=[ 971], 00:25:17.853 | 70.00th=[ 1012], 80.00th=[ 1139], 90.00th=[42206], 95.00th=[42206], 00:25:17.853 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43779], 00:25:17.853 | 99.99th=[43779] 00:25:17.853 write: IOPS=34, BW=136KiB/s (140kB/s)(8192KiB/60030msec); 0 zone resets 00:25:17.853 slat (usec): min=9, max=32255, avg=53.55, stdev=756.77 00:25:17.853 clat (usec): min=164, max=41687k, avg=20852.36, stdev=921151.04 00:25:17.853 lat (usec): min=174, max=41687k, avg=20905.91, stdev=921150.41 00:25:17.853 clat percentiles (usec): 00:25:17.853 | 1.00th=[ 231], 5.00th=[ 318], 10.00th=[ 347], 00:25:17.853 | 20.00th=[ 416], 30.00th=[ 441], 40.00th=[ 465], 00:25:17.853 | 50.00th=[ 498], 60.00th=[ 529], 70.00th=[ 553], 00:25:17.853 | 80.00th=[ 586], 90.00th=[ 627], 95.00th=[ 668], 00:25:17.853 | 99.00th=[ 709], 99.50th=[ 734], 99.90th=[ 3294], 00:25:17.853 | 99.95th=[ 4359], 99.99th=[17112761] 00:25:17.853 bw ( KiB/s): min= 1104, max= 4096, per=100.00%, avg=2730.67, stdev=1344.48, samples=6 00:25:17.853 iops : min= 276, max= 1024, avg=682.67, stdev=336.12, samples=6 00:25:17.853 lat (usec) : 250=0.88%, 500=25.43%, 
750=35.25%, 1000=22.30% 00:25:17.853 lat (msec) : 2=6.52%, 4=0.03%, 10=0.03%, 50=9.55%, >=2000=0.03% 00:25:17.853 cpu : usr=0.11%, sys=0.20%, ctx=3965, majf=0, minf=1 00:25:17.853 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:17.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:17.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:17.853 issued rwts: total=1912,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:17.853 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:17.853 00:25:17.853 Run status group 0 (all jobs): 00:25:17.853 READ: bw=127KiB/s (130kB/s), 127KiB/s-127KiB/s (130kB/s-130kB/s), io=7648KiB (7832kB), run=60030-60030msec 00:25:17.853 WRITE: bw=136KiB/s (140kB/s), 136KiB/s-136KiB/s (140kB/s-140kB/s), io=8192KiB (8389kB), run=60030-60030msec 00:25:17.853 00:25:17.853 Disk stats (read/write): 00:25:17.853 nvme0n1: ios=1961/2048, merge=0/0, ticks=18383/953, in_queue=19336, util=99.82% 00:25:17.853 18:02:20 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:17.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:17.853 18:02:20 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:17.853 18:02:20 -- common/autotest_common.sh@1198 -- # local i=0 00:25:17.853 18:02:20 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:25:17.853 18:02:20 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:17.853 18:02:20 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:25:17.853 18:02:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:17.853 18:02:20 -- common/autotest_common.sh@1210 -- # return 0 00:25:17.853 18:02:20 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:25:17.853 18:02:20 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:25:17.853 nvmf hotplug test: fio successful as expected 00:25:17.853 18:02:20 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:17.853 18:02:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:17.853 18:02:20 -- common/autotest_common.sh@10 -- # set +x 00:25:17.853 18:02:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:17.853 18:02:20 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:25:17.853 18:02:20 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:25:17.853 18:02:20 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:25:17.853 18:02:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:17.853 18:02:20 -- nvmf/common.sh@116 -- # sync 00:25:17.853 18:02:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:17.853 18:02:20 -- nvmf/common.sh@119 -- # set +e 00:25:17.853 18:02:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:17.853 18:02:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:17.853 rmmod nvme_tcp 00:25:17.853 rmmod nvme_fabrics 00:25:17.853 rmmod nvme_keyring 00:25:17.853 18:02:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:17.853 18:02:20 -- nvmf/common.sh@123 -- # set -e 00:25:17.853 18:02:20 -- nvmf/common.sh@124 -- # return 0 00:25:17.853 18:02:20 -- nvmf/common.sh@477 -- # '[' -n 1757128 ']' 00:25:17.853 18:02:20 -- nvmf/common.sh@478 -- # killprocess 1757128 00:25:17.853 18:02:20 -- common/autotest_common.sh@926 -- # '[' -z 1757128 ']' 00:25:17.853 18:02:20 -- 
common/autotest_common.sh@930 -- # kill -0 1757128 00:25:17.853 18:02:20 -- common/autotest_common.sh@931 -- # uname 00:25:17.853 18:02:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:17.853 18:02:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1757128 00:25:17.853 18:02:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:17.853 18:02:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:17.853 18:02:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1757128' 00:25:17.853 killing process with pid 1757128 00:25:17.853 18:02:20 -- common/autotest_common.sh@945 -- # kill 1757128 00:25:17.853 18:02:20 -- common/autotest_common.sh@950 -- # wait 1757128 00:25:17.853 18:02:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:17.853 18:02:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:17.853 18:02:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:17.853 18:02:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:17.853 18:02:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:17.853 18:02:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.853 18:02:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:17.853 18:02:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.426 18:02:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:18.426 00:25:18.426 real 1m15.875s 00:25:18.426 user 4m31.097s 00:25:18.426 sys 0m7.791s 00:25:18.426 18:02:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:18.426 18:02:22 -- common/autotest_common.sh@10 -- # set +x 00:25:18.426 ************************************ 00:25:18.426 END TEST nvmf_initiator_timeout 00:25:18.426 ************************************ 00:25:18.426 18:02:22 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:25:18.426 18:02:22 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:25:18.426 18:02:22 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:25:18.426 18:02:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:18.426 18:02:22 -- common/autotest_common.sh@10 -- # set +x 00:25:26.570 18:02:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:26.570 18:02:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:26.570 18:02:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:26.570 18:02:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:26.570 18:02:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:26.570 18:02:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:26.570 18:02:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:26.570 18:02:29 -- nvmf/common.sh@294 -- # net_devs=() 00:25:26.570 18:02:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:26.570 18:02:29 -- nvmf/common.sh@295 -- # e810=() 00:25:26.570 18:02:29 -- nvmf/common.sh@295 -- # local -ga e810 00:25:26.570 18:02:29 -- nvmf/common.sh@296 -- # x722=() 00:25:26.570 18:02:29 -- nvmf/common.sh@296 -- # local -ga x722 00:25:26.570 18:02:29 -- nvmf/common.sh@297 -- # mlx=() 00:25:26.570 18:02:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:26.570 18:02:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:26.570 18:02:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:26.570 18:02:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:26.570 18:02:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:26.570 18:02:29 -- nvmf/common.sh@307 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:26.570 18:02:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:26.570 18:02:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:26.570 18:02:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:26.570 18:02:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:26.570 18:02:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:26.570 18:02:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:26.570 18:02:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:26.570 18:02:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:26.570 18:02:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:26.570 18:02:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:26.570 18:02:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:26.570 18:02:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:26.570 18:02:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:26.570 18:02:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:26.570 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:26.570 18:02:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:26.570 18:02:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:26.570 18:02:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.570 18:02:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.570 18:02:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:26.570 18:02:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:26.570 18:02:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:26.570 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:26.570 18:02:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:26.570 18:02:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:26.570 18:02:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.570 18:02:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.570 18:02:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:26.570 18:02:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:26.570 18:02:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:26.570 18:02:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:26.570 18:02:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:26.570 18:02:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.570 18:02:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:26.570 18:02:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.570 18:02:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:26.570 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:26.570 18:02:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.570 18:02:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:26.571 18:02:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.571 18:02:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:26.571 18:02:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.571 18:02:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:26.571 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:26.571 18:02:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.571 18:02:29 -- 
nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:26.571 18:02:29 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:26.571 18:02:29 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:25:26.571 18:02:29 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:26.571 18:02:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:26.571 18:02:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:26.571 18:02:29 -- common/autotest_common.sh@10 -- # set +x 00:25:26.571 ************************************ 00:25:26.571 START TEST nvmf_perf_adq 00:25:26.571 ************************************ 00:25:26.571 18:02:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:26.571 * Looking for test storage... 00:25:26.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:26.571 18:02:29 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:26.571 18:02:29 -- nvmf/common.sh@7 -- # uname -s 00:25:26.571 18:02:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:26.571 18:02:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:26.571 18:02:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:26.571 18:02:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:26.571 18:02:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:26.571 18:02:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:26.571 18:02:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:26.571 18:02:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:26.571 18:02:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:26.571 18:02:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:26.571 18:02:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:25:26.571 18:02:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:25:26.571 18:02:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:26.571 18:02:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:26.571 18:02:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:26.571 18:02:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:26.571 18:02:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:26.571 18:02:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:26.571 18:02:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:26.571 18:02:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.571 18:02:29 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.571 18:02:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.571 18:02:29 -- paths/export.sh@5 -- # export PATH 00:25:26.571 18:02:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.571 18:02:29 -- nvmf/common.sh@46 -- # : 0 00:25:26.571 18:02:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:26.571 18:02:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:26.571 18:02:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:26.571 18:02:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:26.571 18:02:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:26.571 18:02:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:26.571 18:02:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:26.571 18:02:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:26.571 18:02:29 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:25:26.571 18:02:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:26.571 18:02:29 -- common/autotest_common.sh@10 -- # set +x 00:25:34.715 18:02:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:34.715 18:02:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:34.715 18:02:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:34.715 18:02:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:34.715 18:02:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:34.715 18:02:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:34.715 18:02:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:34.715 18:02:37 -- nvmf/common.sh@294 -- # net_devs=() 00:25:34.715 18:02:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:34.715 18:02:37 -- nvmf/common.sh@295 -- # e810=() 00:25:34.715 18:02:37 -- nvmf/common.sh@295 -- # local -ga e810 00:25:34.715 18:02:37 -- nvmf/common.sh@296 -- # x722=() 00:25:34.715 18:02:37 -- nvmf/common.sh@296 -- # local -ga x722 00:25:34.715 18:02:37 -- nvmf/common.sh@297 -- # mlx=() 00:25:34.715 18:02:37 -- nvmf/common.sh@297 -- # local 
-ga mlx 00:25:34.715 18:02:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:34.715 18:02:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:34.715 18:02:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:34.715 18:02:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:34.715 18:02:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:34.715 18:02:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:34.715 18:02:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:34.715 18:02:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:34.715 18:02:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:34.715 18:02:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:34.715 18:02:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:34.715 18:02:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:34.716 18:02:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:34.716 18:02:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:34.716 18:02:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:34.716 18:02:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:34.716 18:02:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:34.716 18:02:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:34.716 18:02:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:34.716 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:34.716 18:02:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:34.716 18:02:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:34.716 18:02:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.716 18:02:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.716 18:02:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:34.716 18:02:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:34.716 18:02:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:34.716 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:34.716 18:02:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:34.716 18:02:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:34.716 18:02:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.716 18:02:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.716 18:02:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:34.716 18:02:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:34.716 18:02:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:34.716 18:02:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:34.716 18:02:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:34.716 18:02:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.716 18:02:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:34.716 18:02:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.716 18:02:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:34.716 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:34.716 18:02:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.716 18:02:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:34.716 18:02:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:25:34.716 18:02:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:34.716 18:02:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.716 18:02:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:34.716 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:34.716 18:02:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.716 18:02:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:34.716 18:02:37 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:34.716 18:02:37 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:25:34.716 18:02:37 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:34.716 18:02:37 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:25:34.716 18:02:37 -- target/perf_adq.sh@52 -- # rmmod ice 00:25:35.288 18:02:39 -- target/perf_adq.sh@53 -- # modprobe ice 00:25:37.207 18:02:41 -- target/perf_adq.sh@54 -- # sleep 5 00:25:42.562 18:02:46 -- target/perf_adq.sh@67 -- # nvmftestinit 00:25:42.562 18:02:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:42.562 18:02:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.562 18:02:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:42.562 18:02:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:42.562 18:02:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:42.562 18:02:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.562 18:02:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:42.562 18:02:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.562 18:02:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:42.562 18:02:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:42.562 18:02:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:42.562 18:02:46 -- common/autotest_common.sh@10 -- # set +x 00:25:42.562 18:02:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:42.562 18:02:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:42.562 18:02:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:42.562 18:02:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:42.562 18:02:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:42.562 18:02:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:42.562 18:02:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:42.562 18:02:46 -- nvmf/common.sh@294 -- # net_devs=() 00:25:42.562 18:02:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:42.562 18:02:46 -- nvmf/common.sh@295 -- # e810=() 00:25:42.562 18:02:46 -- nvmf/common.sh@295 -- # local -ga e810 00:25:42.562 18:02:46 -- nvmf/common.sh@296 -- # x722=() 00:25:42.562 18:02:46 -- nvmf/common.sh@296 -- # local -ga x722 00:25:42.563 18:02:46 -- nvmf/common.sh@297 -- # mlx=() 00:25:42.563 18:02:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:42.563 18:02:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:42.563 18:02:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:42.563 18:02:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:42.563 18:02:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:42.563 18:02:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:42.563 18:02:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:42.563 18:02:46 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:42.563 18:02:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:42.563 18:02:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:42.563 18:02:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:42.563 18:02:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:42.563 18:02:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:42.563 18:02:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:42.563 18:02:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:42.563 18:02:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:42.563 18:02:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:42.563 18:02:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:42.563 18:02:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:42.563 18:02:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:42.563 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:42.563 18:02:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:42.563 18:02:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:42.563 18:02:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.563 18:02:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.563 18:02:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:42.563 18:02:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:42.563 18:02:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:42.563 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:42.563 18:02:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:42.563 18:02:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:42.563 18:02:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.563 18:02:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.563 18:02:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:42.563 18:02:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:42.563 18:02:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:42.563 18:02:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:42.563 18:02:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:42.563 18:02:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.563 18:02:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:42.563 18:02:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.563 18:02:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:42.563 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:42.563 18:02:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.563 18:02:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:42.563 18:02:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.563 18:02:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:42.563 18:02:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.563 18:02:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:42.563 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:42.563 18:02:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.563 18:02:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:42.563 18:02:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:42.563 18:02:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:42.563 18:02:46 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:42.563 18:02:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:42.563 18:02:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:42.563 18:02:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:42.563 18:02:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:42.563 18:02:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:42.563 18:02:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:42.563 18:02:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:42.563 18:02:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:42.563 18:02:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:42.563 18:02:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:42.563 18:02:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:42.563 18:02:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:42.563 18:02:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:42.563 18:02:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:42.563 18:02:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:42.563 18:02:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:42.563 18:02:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:42.563 18:02:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:42.563 18:02:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:42.563 18:02:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:42.563 18:02:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:42.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:42.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:25:42.563 00:25:42.563 --- 10.0.0.2 ping statistics --- 00:25:42.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.563 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:25:42.563 18:02:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:42.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:42.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:25:42.563 00:25:42.563 --- 10.0.0.1 ping statistics --- 00:25:42.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.563 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:25:42.563 18:02:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:42.563 18:02:46 -- nvmf/common.sh@410 -- # return 0 00:25:42.563 18:02:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:42.563 18:02:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:42.563 18:02:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:42.563 18:02:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:42.563 18:02:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:42.563 18:02:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:42.563 18:02:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:42.563 18:02:46 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:42.563 18:02:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:42.563 18:02:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:42.563 18:02:46 -- common/autotest_common.sh@10 -- # set +x 00:25:42.563 18:02:46 -- nvmf/common.sh@469 -- # nvmfpid=1777702 00:25:42.563 18:02:46 -- nvmf/common.sh@470 -- # waitforlisten 1777702 00:25:42.563 18:02:46 -- common/autotest_common.sh@819 -- # '[' -z 1777702 ']' 00:25:42.563 18:02:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:42.563 18:02:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:42.563 18:02:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:42.563 18:02:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:42.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:42.563 18:02:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:42.563 18:02:46 -- common/autotest_common.sh@10 -- # set +x 00:25:42.563 [2024-07-22 18:02:46.634399] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:42.563 [2024-07-22 18:02:46.634469] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:42.563 EAL: No free 2048 kB hugepages reported on node 1 00:25:42.563 [2024-07-22 18:02:46.727846] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:42.563 [2024-07-22 18:02:46.818814] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:42.563 [2024-07-22 18:02:46.818975] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:42.563 [2024-07-22 18:02:46.818983] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:42.563 [2024-07-22 18:02:46.818991] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:42.563 [2024-07-22 18:02:46.819140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.563 [2024-07-22 18:02:46.819269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:42.563 [2024-07-22 18:02:46.819419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:42.563 [2024-07-22 18:02:46.819423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.506 18:02:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:43.506 18:02:47 -- common/autotest_common.sh@852 -- # return 0 00:25:43.506 18:02:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:43.506 18:02:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:43.506 18:02:47 -- common/autotest_common.sh@10 -- # set +x 00:25:43.506 18:02:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:43.506 18:02:47 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:25:43.506 18:02:47 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:25:43.506 18:02:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.506 18:02:47 -- common/autotest_common.sh@10 -- # set +x 00:25:43.506 18:02:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.506 18:02:47 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:25:43.506 18:02:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.506 18:02:47 -- common/autotest_common.sh@10 -- # set +x 00:25:43.506 18:02:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.507 18:02:47 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:25:43.507 18:02:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.507 18:02:47 -- common/autotest_common.sh@10 -- # set +x 00:25:43.507 [2024-07-22 18:02:47.616274] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:43.507 18:02:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.507 18:02:47 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:43.507 18:02:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.507 18:02:47 -- common/autotest_common.sh@10 -- # set +x 00:25:43.507 Malloc1 00:25:43.507 18:02:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.507 18:02:47 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:43.507 18:02:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.507 18:02:47 -- common/autotest_common.sh@10 -- # set +x 00:25:43.507 18:02:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.507 18:02:47 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:43.507 18:02:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.507 18:02:47 -- common/autotest_common.sh@10 -- # set +x 00:25:43.507 18:02:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.507 18:02:47 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:43.507 18:02:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.507 18:02:47 -- common/autotest_common.sh@10 -- # set +x 00:25:43.507 [2024-07-22 18:02:47.668565] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:43.507 18:02:47 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.507 18:02:47 -- target/perf_adq.sh@73 -- # perfpid=1777988 00:25:43.507 18:02:47 -- target/perf_adq.sh@74 -- # sleep 2 00:25:43.507 18:02:47 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:43.507 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.417 18:02:49 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:25:45.417 18:02:49 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:25:45.417 18:02:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.417 18:02:49 -- target/perf_adq.sh@76 -- # wc -l 00:25:45.417 18:02:49 -- common/autotest_common.sh@10 -- # set +x 00:25:45.678 18:02:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.678 18:02:49 -- target/perf_adq.sh@76 -- # count=4 00:25:45.678 18:02:49 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:25:45.678 18:02:49 -- target/perf_adq.sh@81 -- # wait 1777988 00:25:53.811 Initializing NVMe Controllers 00:25:53.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:53.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:53.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:53.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:53.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:53.811 Initialization complete. Launching workers. 00:25:53.811 ======================================================== 00:25:53.811 Latency(us) 00:25:53.811 Device Information : IOPS MiB/s Average min max 00:25:53.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8899.31 34.76 7192.25 3834.42 13682.94 00:25:53.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14820.32 57.89 4318.72 1136.76 44478.97 00:25:53.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 12548.27 49.02 5101.11 1564.00 13540.10 00:25:53.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12178.80 47.57 5254.97 1288.06 13854.06 00:25:53.811 ======================================================== 00:25:53.811 Total : 48446.71 189.24 5284.57 1136.76 44478.97 00:25:53.811 00:25:53.811 18:02:57 -- target/perf_adq.sh@82 -- # nvmftestfini 00:25:53.811 18:02:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:53.811 18:02:57 -- nvmf/common.sh@116 -- # sync 00:25:53.811 18:02:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:53.811 18:02:57 -- nvmf/common.sh@119 -- # set +e 00:25:53.811 18:02:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:53.811 18:02:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:53.811 rmmod nvme_tcp 00:25:53.811 rmmod nvme_fabrics 00:25:53.811 rmmod nvme_keyring 00:25:53.811 18:02:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:53.811 18:02:57 -- nvmf/common.sh@123 -- # set -e 00:25:53.811 18:02:57 -- nvmf/common.sh@124 -- # return 0 00:25:53.811 18:02:57 -- nvmf/common.sh@477 -- # '[' -n 1777702 ']' 00:25:53.811 18:02:57 -- nvmf/common.sh@478 -- # killprocess 1777702 00:25:53.811 18:02:57 -- common/autotest_common.sh@926 -- # '[' -z 1777702 ']' 00:25:53.811 18:02:57 -- common/autotest_common.sh@930 
-- # kill -0 1777702 00:25:53.811 18:02:57 -- common/autotest_common.sh@931 -- # uname 00:25:53.811 18:02:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:53.811 18:02:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1777702 00:25:53.811 18:02:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:53.811 18:02:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:53.811 18:02:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1777702' 00:25:53.811 killing process with pid 1777702 00:25:53.811 18:02:57 -- common/autotest_common.sh@945 -- # kill 1777702 00:25:53.811 18:02:57 -- common/autotest_common.sh@950 -- # wait 1777702 00:25:54.071 18:02:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:54.071 18:02:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:54.071 18:02:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:54.071 18:02:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:54.071 18:02:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:54.071 18:02:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.071 18:02:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:54.071 18:02:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.982 18:03:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:55.982 18:03:00 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:25:55.982 18:03:00 -- target/perf_adq.sh@52 -- # rmmod ice 00:25:57.893 18:03:01 -- target/perf_adq.sh@53 -- # modprobe ice 00:25:59.804 18:03:03 -- target/perf_adq.sh@54 -- # sleep 5 00:26:05.089 18:03:08 -- target/perf_adq.sh@87 -- # nvmftestinit 00:26:05.089 18:03:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:05.089 18:03:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:05.089 18:03:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:05.089 18:03:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:05.089 18:03:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:05.089 18:03:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.089 18:03:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:05.089 18:03:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.089 18:03:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:05.089 18:03:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:05.089 18:03:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:05.089 18:03:08 -- common/autotest_common.sh@10 -- # set +x 00:26:05.089 18:03:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:05.089 18:03:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:05.089 18:03:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:05.089 18:03:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:05.089 18:03:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:05.089 18:03:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:05.089 18:03:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:05.089 18:03:08 -- nvmf/common.sh@294 -- # net_devs=() 00:26:05.089 18:03:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:05.089 18:03:08 -- nvmf/common.sh@295 -- # e810=() 00:26:05.089 18:03:08 -- nvmf/common.sh@295 -- # local -ga e810 00:26:05.089 18:03:08 -- nvmf/common.sh@296 -- # x722=() 00:26:05.089 18:03:08 -- nvmf/common.sh@296 -- # local -ga x722 00:26:05.089 18:03:08 -- nvmf/common.sh@297 -- # mlx=() 00:26:05.089 
18:03:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:05.089 18:03:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:05.089 18:03:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:05.089 18:03:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:05.089 18:03:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:05.089 18:03:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:05.089 18:03:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:05.089 18:03:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:05.089 18:03:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:05.089 18:03:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:05.090 18:03:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:05.090 18:03:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:05.090 18:03:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:05.090 18:03:08 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:05.090 18:03:08 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:05.090 18:03:08 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:05.090 18:03:08 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:05.090 18:03:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:05.090 18:03:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:05.090 18:03:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:05.090 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:05.090 18:03:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:05.090 18:03:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:05.090 18:03:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:05.090 18:03:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:05.090 18:03:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:05.090 18:03:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:05.090 18:03:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:05.090 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:05.090 18:03:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:05.090 18:03:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:05.090 18:03:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:05.090 18:03:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:05.090 18:03:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:05.090 18:03:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:05.090 18:03:08 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:05.090 18:03:08 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:05.090 18:03:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:05.090 18:03:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:05.090 18:03:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:05.090 18:03:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:05.090 18:03:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:05.090 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:05.090 18:03:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:05.090 18:03:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:05.090 18:03:08 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:05.090 18:03:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:05.090 18:03:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:05.090 18:03:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:05.090 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:05.090 18:03:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:05.090 18:03:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:05.090 18:03:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:05.090 18:03:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:05.090 18:03:08 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:05.090 18:03:08 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:05.090 18:03:08 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:05.090 18:03:08 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:05.090 18:03:08 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:05.090 18:03:08 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:05.090 18:03:08 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:05.090 18:03:08 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:05.090 18:03:08 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:05.090 18:03:08 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:05.090 18:03:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:05.090 18:03:08 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:05.090 18:03:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:05.090 18:03:08 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:05.090 18:03:08 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:05.090 18:03:08 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:05.090 18:03:08 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:05.090 18:03:08 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:05.090 18:03:08 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:05.090 18:03:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:05.090 18:03:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:05.090 18:03:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:05.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:05.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:26:05.090 00:26:05.090 --- 10.0.0.2 ping statistics --- 00:26:05.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.090 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:26:05.090 18:03:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:05.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:05.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:26:05.090 00:26:05.090 --- 10.0.0.1 ping statistics --- 00:26:05.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.090 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:26:05.090 18:03:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:05.090 18:03:09 -- nvmf/common.sh@410 -- # return 0 00:26:05.090 18:03:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:05.090 18:03:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:05.090 18:03:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:05.090 18:03:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:05.090 18:03:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:05.090 18:03:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:05.090 18:03:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:05.090 18:03:09 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:26:05.090 18:03:09 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:26:05.090 18:03:09 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:05.090 18:03:09 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:05.090 net.core.busy_poll = 1 00:26:05.090 18:03:09 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:26:05.090 net.core.busy_read = 1 00:26:05.090 18:03:09 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:05.090 18:03:09 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:05.090 18:03:09 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:26:05.090 18:03:09 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:05.090 18:03:09 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:05.353 18:03:09 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:05.353 18:03:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:05.353 18:03:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:05.353 18:03:09 -- common/autotest_common.sh@10 -- # set +x 00:26:05.353 18:03:09 -- nvmf/common.sh@469 -- # nvmfpid=1782147 00:26:05.353 18:03:09 -- nvmf/common.sh@470 -- # waitforlisten 1782147 00:26:05.353 18:03:09 -- common/autotest_common.sh@819 -- # '[' -z 1782147 ']' 00:26:05.353 18:03:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:05.353 18:03:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:05.353 18:03:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:05.353 18:03:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:05.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:05.353 18:03:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:05.353 18:03:09 -- common/autotest_common.sh@10 -- # set +x 00:26:05.353 [2024-07-22 18:03:09.434100] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:05.353 [2024-07-22 18:03:09.434161] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:05.353 EAL: No free 2048 kB hugepages reported on node 1 00:26:05.353 [2024-07-22 18:03:09.527654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:05.353 [2024-07-22 18:03:09.621480] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:05.353 [2024-07-22 18:03:09.621641] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:05.353 [2024-07-22 18:03:09.621650] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:05.353 [2024-07-22 18:03:09.621665] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:05.353 [2024-07-22 18:03:09.621817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:05.353 [2024-07-22 18:03:09.621911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:05.353 [2024-07-22 18:03:09.622010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:05.353 [2024-07-22 18:03:09.622013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.293 18:03:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:06.293 18:03:10 -- common/autotest_common.sh@852 -- # return 0 00:26:06.293 18:03:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:06.293 18:03:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:06.293 18:03:10 -- common/autotest_common.sh@10 -- # set +x 00:26:06.293 18:03:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:06.293 18:03:10 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:26:06.293 18:03:10 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:06.293 18:03:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:06.293 18:03:10 -- common/autotest_common.sh@10 -- # set +x 00:26:06.293 18:03:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:06.293 18:03:10 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:26:06.293 18:03:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:06.293 18:03:10 -- common/autotest_common.sh@10 -- # set +x 00:26:06.293 18:03:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:06.294 18:03:10 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:06.294 18:03:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:06.294 18:03:10 -- common/autotest_common.sh@10 -- # set +x 00:26:06.294 [2024-07-22 18:03:10.391199] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:06.294 18:03:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:06.294 18:03:10 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:06.294 18:03:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:06.294 18:03:10 -- 
common/autotest_common.sh@10 -- # set +x 00:26:06.294 Malloc1 00:26:06.294 18:03:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:06.294 18:03:10 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:06.294 18:03:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:06.294 18:03:10 -- common/autotest_common.sh@10 -- # set +x 00:26:06.294 18:03:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:06.294 18:03:10 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:06.294 18:03:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:06.294 18:03:10 -- common/autotest_common.sh@10 -- # set +x 00:26:06.294 18:03:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:06.294 18:03:10 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:06.294 18:03:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:06.294 18:03:10 -- common/autotest_common.sh@10 -- # set +x 00:26:06.294 [2024-07-22 18:03:10.443506] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:06.294 18:03:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:06.294 18:03:10 -- target/perf_adq.sh@94 -- # perfpid=1782609 00:26:06.294 18:03:10 -- target/perf_adq.sh@95 -- # sleep 2 00:26:06.294 18:03:10 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:06.294 EAL: No free 2048 kB hugepages reported on node 1 00:26:08.201 18:03:12 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:26:08.201 18:03:12 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:08.201 18:03:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:08.201 18:03:12 -- target/perf_adq.sh@97 -- # wc -l 00:26:08.201 18:03:12 -- common/autotest_common.sh@10 -- # set +x 00:26:08.201 18:03:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:08.460 18:03:12 -- target/perf_adq.sh@97 -- # count=2 00:26:08.460 18:03:12 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:26:08.460 18:03:12 -- target/perf_adq.sh@103 -- # wait 1782609 00:26:16.686 Initializing NVMe Controllers 00:26:16.686 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:16.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:16.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:16.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:16.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:16.686 Initialization complete. Launching workers. 
00:26:16.686 ======================================================== 00:26:16.686 Latency(us) 00:26:16.686 Device Information : IOPS MiB/s Average min max 00:26:16.686 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6191.10 24.18 10340.07 3144.01 51578.12 00:26:16.686 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9263.70 36.19 6930.22 969.41 51004.54 00:26:16.686 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8379.20 32.73 7639.21 1014.25 51000.12 00:26:16.686 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13983.19 54.62 4576.63 989.25 51878.33 00:26:16.686 ======================================================== 00:26:16.686 Total : 37817.18 147.72 6775.28 969.41 51878.33 00:26:16.686 00:26:16.686 18:03:20 -- target/perf_adq.sh@104 -- # nvmftestfini 00:26:16.686 18:03:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:16.686 18:03:20 -- nvmf/common.sh@116 -- # sync 00:26:16.686 18:03:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:16.686 18:03:20 -- nvmf/common.sh@119 -- # set +e 00:26:16.686 18:03:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:16.686 18:03:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:16.686 rmmod nvme_tcp 00:26:16.686 rmmod nvme_fabrics 00:26:16.686 rmmod nvme_keyring 00:26:16.686 18:03:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:16.686 18:03:20 -- nvmf/common.sh@123 -- # set -e 00:26:16.686 18:03:20 -- nvmf/common.sh@124 -- # return 0 00:26:16.686 18:03:20 -- nvmf/common.sh@477 -- # '[' -n 1782147 ']' 00:26:16.686 18:03:20 -- nvmf/common.sh@478 -- # killprocess 1782147 00:26:16.686 18:03:20 -- common/autotest_common.sh@926 -- # '[' -z 1782147 ']' 00:26:16.686 18:03:20 -- common/autotest_common.sh@930 -- # kill -0 1782147 00:26:16.686 18:03:20 -- common/autotest_common.sh@931 -- # uname 00:26:16.686 18:03:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:16.686 18:03:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1782147 00:26:16.686 18:03:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:16.686 18:03:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:16.686 18:03:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1782147' 00:26:16.686 killing process with pid 1782147 00:26:16.686 18:03:20 -- common/autotest_common.sh@945 -- # kill 1782147 00:26:16.686 18:03:20 -- common/autotest_common.sh@950 -- # wait 1782147 00:26:16.686 18:03:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:16.686 18:03:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:16.686 18:03:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:16.686 18:03:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:16.686 18:03:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:16.686 18:03:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.686 18:03:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:16.686 18:03:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:19.986 18:03:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:19.986 18:03:23 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:26:19.986 00:26:19.986 real 0m54.173s 00:26:19.986 user 2m48.066s 00:26:19.986 sys 0m11.894s 00:26:19.986 18:03:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:19.986 18:03:23 -- common/autotest_common.sh@10 -- # set +x 00:26:19.986 
************************************ 00:26:19.986 END TEST nvmf_perf_adq 00:26:19.986 ************************************ 00:26:19.986 18:03:24 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:19.986 18:03:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:19.986 18:03:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:19.986 18:03:24 -- common/autotest_common.sh@10 -- # set +x 00:26:19.986 ************************************ 00:26:19.986 START TEST nvmf_shutdown 00:26:19.986 ************************************ 00:26:19.986 18:03:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:19.986 * Looking for test storage... 00:26:19.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:19.986 18:03:24 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:19.986 18:03:24 -- nvmf/common.sh@7 -- # uname -s 00:26:19.986 18:03:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:19.986 18:03:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:19.986 18:03:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:19.986 18:03:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:19.986 18:03:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:19.986 18:03:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:19.986 18:03:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:19.986 18:03:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:19.986 18:03:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:19.986 18:03:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:19.986 18:03:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:26:19.986 18:03:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:26:19.986 18:03:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:19.986 18:03:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:19.986 18:03:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:19.986 18:03:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:19.986 18:03:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:19.986 18:03:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:19.986 18:03:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:19.986 18:03:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.986 18:03:24 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.986 18:03:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.986 18:03:24 -- paths/export.sh@5 -- # export PATH 00:26:19.986 18:03:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.986 18:03:24 -- nvmf/common.sh@46 -- # : 0 00:26:19.986 18:03:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:19.986 18:03:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:19.986 18:03:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:19.986 18:03:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:19.986 18:03:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:19.986 18:03:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:19.986 18:03:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:19.986 18:03:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:19.987 18:03:24 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:19.987 18:03:24 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:19.987 18:03:24 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:26:19.987 18:03:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:19.987 18:03:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:19.987 18:03:24 -- common/autotest_common.sh@10 -- # set +x 00:26:19.987 ************************************ 00:26:19.987 START TEST nvmf_shutdown_tc1 00:26:19.987 ************************************ 00:26:19.987 18:03:24 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:26:19.987 18:03:24 -- target/shutdown.sh@74 -- # starttarget 00:26:19.987 18:03:24 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:19.987 18:03:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:19.987 18:03:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:19.987 18:03:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:19.987 18:03:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:19.987 18:03:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:19.987 
18:03:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.987 18:03:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:19.987 18:03:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:19.987 18:03:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:19.987 18:03:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:19.987 18:03:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:19.987 18:03:24 -- common/autotest_common.sh@10 -- # set +x 00:26:28.212 18:03:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:28.212 18:03:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:28.212 18:03:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:28.212 18:03:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:28.212 18:03:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:28.212 18:03:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:28.212 18:03:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:28.212 18:03:31 -- nvmf/common.sh@294 -- # net_devs=() 00:26:28.212 18:03:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:28.212 18:03:31 -- nvmf/common.sh@295 -- # e810=() 00:26:28.212 18:03:31 -- nvmf/common.sh@295 -- # local -ga e810 00:26:28.212 18:03:31 -- nvmf/common.sh@296 -- # x722=() 00:26:28.212 18:03:31 -- nvmf/common.sh@296 -- # local -ga x722 00:26:28.212 18:03:31 -- nvmf/common.sh@297 -- # mlx=() 00:26:28.212 18:03:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:28.212 18:03:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:28.212 18:03:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:28.212 18:03:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:28.212 18:03:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:28.212 18:03:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:28.212 18:03:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:28.212 18:03:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:28.212 18:03:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:28.212 18:03:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:28.212 18:03:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:28.212 18:03:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:28.212 18:03:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:28.212 18:03:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:28.212 18:03:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:28.212 18:03:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:28.212 18:03:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:28.212 18:03:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:28.212 18:03:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:28.212 18:03:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:28.212 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:28.212 18:03:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:28.212 18:03:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:28.212 18:03:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.212 18:03:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.212 18:03:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:28.212 18:03:31 -- nvmf/common.sh@339 
-- # for pci in "${pci_devs[@]}" 00:26:28.212 18:03:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:28.212 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:28.212 18:03:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:28.212 18:03:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:28.212 18:03:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.212 18:03:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.212 18:03:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:28.212 18:03:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:28.212 18:03:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:28.212 18:03:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:28.212 18:03:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:28.212 18:03:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.212 18:03:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:28.212 18:03:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.212 18:03:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:28.212 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:28.212 18:03:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.212 18:03:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:28.212 18:03:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.212 18:03:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:28.212 18:03:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.212 18:03:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:28.212 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:28.212 18:03:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.212 18:03:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:28.212 18:03:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:28.212 18:03:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:28.212 18:03:32 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:28.212 18:03:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:28.212 18:03:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:28.212 18:03:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:28.212 18:03:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:28.212 18:03:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:28.212 18:03:32 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:28.212 18:03:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:28.212 18:03:32 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:28.212 18:03:32 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:28.212 18:03:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:28.212 18:03:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:28.212 18:03:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:28.212 18:03:32 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:28.212 18:03:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:28.212 18:03:32 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:28.212 18:03:32 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:28.212 18:03:32 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:28.212 18:03:32 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:28.212 18:03:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:28.212 18:03:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:28.212 18:03:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:28.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:28.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:26:28.212 00:26:28.212 --- 10.0.0.2 ping statistics --- 00:26:28.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.212 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:26:28.212 18:03:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:28.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:28.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:26:28.212 00:26:28.212 --- 10.0.0.1 ping statistics --- 00:26:28.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.212 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:26:28.212 18:03:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:28.212 18:03:32 -- nvmf/common.sh@410 -- # return 0 00:26:28.212 18:03:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:28.212 18:03:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:28.212 18:03:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:28.212 18:03:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:28.212 18:03:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:28.212 18:03:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:28.212 18:03:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:28.212 18:03:32 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:28.212 18:03:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:28.212 18:03:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:28.212 18:03:32 -- common/autotest_common.sh@10 -- # set +x 00:26:28.212 18:03:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:28.212 18:03:32 -- nvmf/common.sh@469 -- # nvmfpid=1788950 00:26:28.212 18:03:32 -- nvmf/common.sh@470 -- # waitforlisten 1788950 00:26:28.212 18:03:32 -- common/autotest_common.sh@819 -- # '[' -z 1788950 ']' 00:26:28.212 18:03:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.212 18:03:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:28.212 18:03:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.212 18:03:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:28.212 18:03:32 -- common/autotest_common.sh@10 -- # set +x 00:26:28.212 [2024-07-22 18:03:32.375965] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:26:28.212 [2024-07-22 18:03:32.376017] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.212 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.212 [2024-07-22 18:03:32.446935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:28.473 [2024-07-22 18:03:32.509708] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:28.473 [2024-07-22 18:03:32.509832] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:28.473 [2024-07-22 18:03:32.509842] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:28.473 [2024-07-22 18:03:32.509849] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:28.473 [2024-07-22 18:03:32.509981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:28.473 [2024-07-22 18:03:32.510091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:28.473 [2024-07-22 18:03:32.513385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:28.473 [2024-07-22 18:03:32.513553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.473 18:03:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:28.473 18:03:32 -- common/autotest_common.sh@852 -- # return 0 00:26:28.473 18:03:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:28.473 18:03:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:28.473 18:03:32 -- common/autotest_common.sh@10 -- # set +x 00:26:28.473 18:03:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.473 18:03:32 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:28.473 18:03:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:28.473 18:03:32 -- common/autotest_common.sh@10 -- # set +x 00:26:28.473 [2024-07-22 18:03:32.704935] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:28.473 18:03:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:28.473 18:03:32 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:28.473 18:03:32 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:28.473 18:03:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:28.473 18:03:32 -- common/autotest_common.sh@10 -- # set +x 00:26:28.473 18:03:32 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:28.473 18:03:32 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:28.473 18:03:32 -- target/shutdown.sh@28 -- # cat 00:26:28.473 18:03:32 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:28.473 18:03:32 -- target/shutdown.sh@28 -- # cat 00:26:28.473 18:03:32 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:28.473 18:03:32 -- target/shutdown.sh@28 -- # cat 00:26:28.473 18:03:32 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:28.473 18:03:32 -- target/shutdown.sh@28 -- # cat 00:26:28.473 18:03:32 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:28.473 18:03:32 -- target/shutdown.sh@28 -- # cat 00:26:28.473 18:03:32 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:28.473 18:03:32 -- 
target/shutdown.sh@28 -- # cat 00:26:28.733 18:03:32 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:28.734 18:03:32 -- target/shutdown.sh@28 -- # cat 00:26:28.734 18:03:32 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:28.734 18:03:32 -- target/shutdown.sh@28 -- # cat 00:26:28.734 18:03:32 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:28.734 18:03:32 -- target/shutdown.sh@28 -- # cat 00:26:28.734 18:03:32 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:28.734 18:03:32 -- target/shutdown.sh@28 -- # cat 00:26:28.734 18:03:32 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:28.734 18:03:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:28.734 18:03:32 -- common/autotest_common.sh@10 -- # set +x 00:26:28.734 Malloc1 00:26:28.734 [2024-07-22 18:03:32.808170] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:28.734 Malloc2 00:26:28.734 Malloc3 00:26:28.734 Malloc4 00:26:28.734 Malloc5 00:26:28.734 Malloc6 00:26:28.994 Malloc7 00:26:28.994 Malloc8 00:26:28.994 Malloc9 00:26:28.994 Malloc10 00:26:28.994 18:03:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:28.994 18:03:33 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:28.994 18:03:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:28.994 18:03:33 -- common/autotest_common.sh@10 -- # set +x 00:26:28.994 18:03:33 -- target/shutdown.sh@78 -- # perfpid=1789265 00:26:28.994 18:03:33 -- target/shutdown.sh@79 -- # waitforlisten 1789265 /var/tmp/bdevperf.sock 00:26:28.994 18:03:33 -- common/autotest_common.sh@819 -- # '[' -z 1789265 ']' 00:26:28.994 18:03:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:28.994 18:03:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:28.994 18:03:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:28.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
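Annotation: the cat loop above assembles the per-subsystem create commands into rpcs.txt, which rpc_cmd then issues to the target; the file's exact contents are not echoed in this trace. As a rough, hypothetical sketch of what each of the ten Malloc-backed TCP subsystems amounts to (assumed from the standard SPDK rpc.py flow and the sizes/serial set earlier in shutdown.sh, not copied from rpcs.txt):

    # hypothetical equivalent for cnode1; repeated for cnode2..cnode10
    scripts/rpc.py bdev_malloc_create -b Malloc1 64 512                                  # MALLOC_BDEV_SIZE=64 MB, MALLOC_BLOCK_SIZE=512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

These would run after the nvmf_create_transport -t tcp -o -u 8192 call shown earlier, giving the ten listeners on 10.0.0.2:4420 that the initiator side attaches to below.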
00:26:28.994 18:03:33 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:26:28.994 18:03:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:28.994 18:03:33 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:28.994 18:03:33 -- common/autotest_common.sh@10 -- # set +x 00:26:28.994 18:03:33 -- nvmf/common.sh@520 -- # config=() 00:26:28.994 18:03:33 -- nvmf/common.sh@520 -- # local subsystem config 00:26:28.994 18:03:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:28.994 18:03:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:28.994 { 00:26:28.994 "params": { 00:26:28.994 "name": "Nvme$subsystem", 00:26:28.994 "trtype": "$TEST_TRANSPORT", 00:26:28.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.994 "adrfam": "ipv4", 00:26:28.994 "trsvcid": "$NVMF_PORT", 00:26:28.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.994 "hdgst": ${hdgst:-false}, 00:26:28.994 "ddgst": ${ddgst:-false} 00:26:28.994 }, 00:26:28.994 "method": "bdev_nvme_attach_controller" 00:26:28.994 } 00:26:28.994 EOF 00:26:28.994 )") 00:26:28.994 18:03:33 -- nvmf/common.sh@542 -- # cat 00:26:28.995 18:03:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:28.995 18:03:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:28.995 { 00:26:28.995 "params": { 00:26:28.995 "name": "Nvme$subsystem", 00:26:28.995 "trtype": "$TEST_TRANSPORT", 00:26:28.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.995 "adrfam": "ipv4", 00:26:28.995 "trsvcid": "$NVMF_PORT", 00:26:28.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.995 "hdgst": ${hdgst:-false}, 00:26:28.995 "ddgst": ${ddgst:-false} 00:26:28.995 }, 00:26:28.995 "method": "bdev_nvme_attach_controller" 00:26:28.995 } 00:26:28.995 EOF 00:26:28.995 )") 00:26:28.995 18:03:33 -- nvmf/common.sh@542 -- # cat 00:26:28.995 18:03:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:28.995 18:03:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:28.995 { 00:26:28.995 "params": { 00:26:28.995 "name": "Nvme$subsystem", 00:26:28.995 "trtype": "$TEST_TRANSPORT", 00:26:28.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.995 "adrfam": "ipv4", 00:26:28.995 "trsvcid": "$NVMF_PORT", 00:26:28.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.995 "hdgst": ${hdgst:-false}, 00:26:28.995 "ddgst": ${ddgst:-false} 00:26:28.995 }, 00:26:28.995 "method": "bdev_nvme_attach_controller" 00:26:28.995 } 00:26:28.995 EOF 00:26:28.995 )") 00:26:28.995 18:03:33 -- nvmf/common.sh@542 -- # cat 00:26:28.995 18:03:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:28.995 18:03:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:28.995 { 00:26:28.995 "params": { 00:26:28.995 "name": "Nvme$subsystem", 00:26:28.995 "trtype": "$TEST_TRANSPORT", 00:26:28.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.995 "adrfam": "ipv4", 00:26:28.995 "trsvcid": "$NVMF_PORT", 00:26:28.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.995 "hdgst": ${hdgst:-false}, 00:26:28.995 "ddgst": ${ddgst:-false} 00:26:28.995 }, 00:26:28.995 "method": "bdev_nvme_attach_controller" 00:26:28.995 } 00:26:28.995 EOF 00:26:28.995 )") 00:26:28.995 18:03:33 -- 
nvmf/common.sh@542 -- # cat 00:26:28.995 18:03:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:28.995 18:03:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:28.995 { 00:26:28.995 "params": { 00:26:28.995 "name": "Nvme$subsystem", 00:26:28.995 "trtype": "$TEST_TRANSPORT", 00:26:28.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.995 "adrfam": "ipv4", 00:26:28.995 "trsvcid": "$NVMF_PORT", 00:26:28.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.995 "hdgst": ${hdgst:-false}, 00:26:28.995 "ddgst": ${ddgst:-false} 00:26:28.995 }, 00:26:28.995 "method": "bdev_nvme_attach_controller" 00:26:28.995 } 00:26:28.995 EOF 00:26:28.995 )") 00:26:28.995 18:03:33 -- nvmf/common.sh@542 -- # cat 00:26:28.995 18:03:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:28.995 18:03:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:28.995 { 00:26:28.995 "params": { 00:26:28.995 "name": "Nvme$subsystem", 00:26:28.995 "trtype": "$TEST_TRANSPORT", 00:26:28.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.995 "adrfam": "ipv4", 00:26:28.995 "trsvcid": "$NVMF_PORT", 00:26:28.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.995 "hdgst": ${hdgst:-false}, 00:26:28.995 "ddgst": ${ddgst:-false} 00:26:28.995 }, 00:26:28.995 "method": "bdev_nvme_attach_controller" 00:26:28.995 } 00:26:28.995 EOF 00:26:28.995 )") 00:26:28.995 [2024-07-22 18:03:33.250388] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:28.995 [2024-07-22 18:03:33.250438] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:28.995 18:03:33 -- nvmf/common.sh@542 -- # cat 00:26:28.995 18:03:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:28.995 18:03:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:28.995 { 00:26:28.995 "params": { 00:26:28.995 "name": "Nvme$subsystem", 00:26:28.995 "trtype": "$TEST_TRANSPORT", 00:26:28.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.995 "adrfam": "ipv4", 00:26:28.995 "trsvcid": "$NVMF_PORT", 00:26:28.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.995 "hdgst": ${hdgst:-false}, 00:26:28.995 "ddgst": ${ddgst:-false} 00:26:28.995 }, 00:26:28.995 "method": "bdev_nvme_attach_controller" 00:26:28.995 } 00:26:28.995 EOF 00:26:28.995 )") 00:26:28.995 18:03:33 -- nvmf/common.sh@542 -- # cat 00:26:28.995 18:03:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:28.995 18:03:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:28.995 { 00:26:28.995 "params": { 00:26:28.995 "name": "Nvme$subsystem", 00:26:28.995 "trtype": "$TEST_TRANSPORT", 00:26:28.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:28.995 "adrfam": "ipv4", 00:26:28.995 "trsvcid": "$NVMF_PORT", 00:26:28.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.995 "hdgst": ${hdgst:-false}, 00:26:28.995 "ddgst": ${ddgst:-false} 00:26:28.995 }, 00:26:28.995 "method": "bdev_nvme_attach_controller" 00:26:28.995 } 00:26:28.995 EOF 00:26:28.995 )") 00:26:28.995 18:03:33 -- nvmf/common.sh@542 -- # cat 00:26:29.255 18:03:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:29.255 18:03:33 -- 
nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:29.255 { 00:26:29.255 "params": { 00:26:29.255 "name": "Nvme$subsystem", 00:26:29.255 "trtype": "$TEST_TRANSPORT", 00:26:29.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.255 "adrfam": "ipv4", 00:26:29.255 "trsvcid": "$NVMF_PORT", 00:26:29.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.255 "hdgst": ${hdgst:-false}, 00:26:29.255 "ddgst": ${ddgst:-false} 00:26:29.255 }, 00:26:29.255 "method": "bdev_nvme_attach_controller" 00:26:29.255 } 00:26:29.255 EOF 00:26:29.255 )") 00:26:29.255 18:03:33 -- nvmf/common.sh@542 -- # cat 00:26:29.255 EAL: No free 2048 kB hugepages reported on node 1 00:26:29.255 18:03:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:29.255 18:03:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:29.255 { 00:26:29.255 "params": { 00:26:29.255 "name": "Nvme$subsystem", 00:26:29.255 "trtype": "$TEST_TRANSPORT", 00:26:29.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.255 "adrfam": "ipv4", 00:26:29.255 "trsvcid": "$NVMF_PORT", 00:26:29.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.255 "hdgst": ${hdgst:-false}, 00:26:29.255 "ddgst": ${ddgst:-false} 00:26:29.255 }, 00:26:29.255 "method": "bdev_nvme_attach_controller" 00:26:29.255 } 00:26:29.255 EOF 00:26:29.255 )") 00:26:29.255 18:03:33 -- nvmf/common.sh@542 -- # cat 00:26:29.255 18:03:33 -- nvmf/common.sh@544 -- # jq . 00:26:29.255 18:03:33 -- nvmf/common.sh@545 -- # IFS=, 00:26:29.255 18:03:33 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:29.255 "params": { 00:26:29.255 "name": "Nvme1", 00:26:29.255 "trtype": "tcp", 00:26:29.255 "traddr": "10.0.0.2", 00:26:29.255 "adrfam": "ipv4", 00:26:29.255 "trsvcid": "4420", 00:26:29.255 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:29.255 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:29.255 "hdgst": false, 00:26:29.255 "ddgst": false 00:26:29.255 }, 00:26:29.255 "method": "bdev_nvme_attach_controller" 00:26:29.255 },{ 00:26:29.255 "params": { 00:26:29.255 "name": "Nvme2", 00:26:29.255 "trtype": "tcp", 00:26:29.255 "traddr": "10.0.0.2", 00:26:29.255 "adrfam": "ipv4", 00:26:29.255 "trsvcid": "4420", 00:26:29.255 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:29.255 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:29.255 "hdgst": false, 00:26:29.255 "ddgst": false 00:26:29.255 }, 00:26:29.255 "method": "bdev_nvme_attach_controller" 00:26:29.255 },{ 00:26:29.255 "params": { 00:26:29.255 "name": "Nvme3", 00:26:29.255 "trtype": "tcp", 00:26:29.255 "traddr": "10.0.0.2", 00:26:29.255 "adrfam": "ipv4", 00:26:29.255 "trsvcid": "4420", 00:26:29.255 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:29.255 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:29.255 "hdgst": false, 00:26:29.255 "ddgst": false 00:26:29.255 }, 00:26:29.255 "method": "bdev_nvme_attach_controller" 00:26:29.255 },{ 00:26:29.255 "params": { 00:26:29.255 "name": "Nvme4", 00:26:29.255 "trtype": "tcp", 00:26:29.255 "traddr": "10.0.0.2", 00:26:29.255 "adrfam": "ipv4", 00:26:29.255 "trsvcid": "4420", 00:26:29.255 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:29.255 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:29.255 "hdgst": false, 00:26:29.255 "ddgst": false 00:26:29.255 }, 00:26:29.255 "method": "bdev_nvme_attach_controller" 00:26:29.255 },{ 00:26:29.255 "params": { 00:26:29.255 "name": "Nvme5", 00:26:29.255 "trtype": "tcp", 00:26:29.255 "traddr": "10.0.0.2", 00:26:29.255 "adrfam": "ipv4", 00:26:29.255 
"trsvcid": "4420", 00:26:29.255 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:29.255 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:29.255 "hdgst": false, 00:26:29.255 "ddgst": false 00:26:29.255 }, 00:26:29.255 "method": "bdev_nvme_attach_controller" 00:26:29.255 },{ 00:26:29.255 "params": { 00:26:29.255 "name": "Nvme6", 00:26:29.255 "trtype": "tcp", 00:26:29.255 "traddr": "10.0.0.2", 00:26:29.255 "adrfam": "ipv4", 00:26:29.255 "trsvcid": "4420", 00:26:29.255 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:29.255 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:29.255 "hdgst": false, 00:26:29.255 "ddgst": false 00:26:29.255 }, 00:26:29.255 "method": "bdev_nvme_attach_controller" 00:26:29.255 },{ 00:26:29.255 "params": { 00:26:29.255 "name": "Nvme7", 00:26:29.255 "trtype": "tcp", 00:26:29.255 "traddr": "10.0.0.2", 00:26:29.255 "adrfam": "ipv4", 00:26:29.255 "trsvcid": "4420", 00:26:29.255 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:29.255 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:29.255 "hdgst": false, 00:26:29.255 "ddgst": false 00:26:29.255 }, 00:26:29.255 "method": "bdev_nvme_attach_controller" 00:26:29.255 },{ 00:26:29.255 "params": { 00:26:29.255 "name": "Nvme8", 00:26:29.255 "trtype": "tcp", 00:26:29.255 "traddr": "10.0.0.2", 00:26:29.255 "adrfam": "ipv4", 00:26:29.255 "trsvcid": "4420", 00:26:29.255 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:29.255 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:29.255 "hdgst": false, 00:26:29.255 "ddgst": false 00:26:29.255 }, 00:26:29.255 "method": "bdev_nvme_attach_controller" 00:26:29.255 },{ 00:26:29.255 "params": { 00:26:29.255 "name": "Nvme9", 00:26:29.255 "trtype": "tcp", 00:26:29.255 "traddr": "10.0.0.2", 00:26:29.255 "adrfam": "ipv4", 00:26:29.255 "trsvcid": "4420", 00:26:29.255 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:29.255 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:29.255 "hdgst": false, 00:26:29.255 "ddgst": false 00:26:29.255 }, 00:26:29.255 "method": "bdev_nvme_attach_controller" 00:26:29.255 },{ 00:26:29.255 "params": { 00:26:29.255 "name": "Nvme10", 00:26:29.255 "trtype": "tcp", 00:26:29.255 "traddr": "10.0.0.2", 00:26:29.255 "adrfam": "ipv4", 00:26:29.255 "trsvcid": "4420", 00:26:29.255 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:29.255 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:29.255 "hdgst": false, 00:26:29.255 "ddgst": false 00:26:29.255 }, 00:26:29.255 "method": "bdev_nvme_attach_controller" 00:26:29.255 }' 00:26:29.255 [2024-07-22 18:03:33.332374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.256 [2024-07-22 18:03:33.392881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.639 18:03:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:30.639 18:03:34 -- common/autotest_common.sh@852 -- # return 0 00:26:30.639 18:03:34 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:30.639 18:03:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:30.639 18:03:34 -- common/autotest_common.sh@10 -- # set +x 00:26:30.639 18:03:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:30.639 18:03:34 -- target/shutdown.sh@83 -- # kill -9 1789265 00:26:30.639 18:03:34 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:26:30.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1789265 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:26:30.639 18:03:34 -- target/shutdown.sh@87 -- # sleep 1 
00:26:31.582 18:03:35 -- target/shutdown.sh@88 -- # kill -0 1788950 00:26:31.582 18:03:35 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:31.582 18:03:35 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:31.582 18:03:35 -- nvmf/common.sh@520 -- # config=() 00:26:31.582 18:03:35 -- nvmf/common.sh@520 -- # local subsystem config 00:26:31.582 18:03:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:31.582 18:03:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:31.582 { 00:26:31.582 "params": { 00:26:31.582 "name": "Nvme$subsystem", 00:26:31.582 "trtype": "$TEST_TRANSPORT", 00:26:31.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:31.582 "adrfam": "ipv4", 00:26:31.582 "trsvcid": "$NVMF_PORT", 00:26:31.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:31.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:31.582 "hdgst": ${hdgst:-false}, 00:26:31.582 "ddgst": ${ddgst:-false} 00:26:31.582 }, 00:26:31.582 "method": "bdev_nvme_attach_controller" 00:26:31.582 } 00:26:31.582 EOF 00:26:31.582 )") 00:26:31.582 18:03:35 -- nvmf/common.sh@542 -- # cat 00:26:31.582 18:03:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:31.582 18:03:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:31.582 { 00:26:31.582 "params": { 00:26:31.582 "name": "Nvme$subsystem", 00:26:31.582 "trtype": "$TEST_TRANSPORT", 00:26:31.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:31.582 "adrfam": "ipv4", 00:26:31.582 "trsvcid": "$NVMF_PORT", 00:26:31.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:31.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:31.582 "hdgst": ${hdgst:-false}, 00:26:31.582 "ddgst": ${ddgst:-false} 00:26:31.582 }, 00:26:31.582 "method": "bdev_nvme_attach_controller" 00:26:31.582 } 00:26:31.582 EOF 00:26:31.582 )") 00:26:31.582 18:03:35 -- nvmf/common.sh@542 -- # cat 00:26:31.582 18:03:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:31.582 18:03:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:31.582 { 00:26:31.582 "params": { 00:26:31.582 "name": "Nvme$subsystem", 00:26:31.582 "trtype": "$TEST_TRANSPORT", 00:26:31.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:31.582 "adrfam": "ipv4", 00:26:31.582 "trsvcid": "$NVMF_PORT", 00:26:31.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:31.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:31.582 "hdgst": ${hdgst:-false}, 00:26:31.582 "ddgst": ${ddgst:-false} 00:26:31.582 }, 00:26:31.582 "method": "bdev_nvme_attach_controller" 00:26:31.582 } 00:26:31.582 EOF 00:26:31.582 )") 00:26:31.582 18:03:35 -- nvmf/common.sh@542 -- # cat 00:26:31.582 18:03:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:31.582 18:03:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:31.582 { 00:26:31.582 "params": { 00:26:31.582 "name": "Nvme$subsystem", 00:26:31.582 "trtype": "$TEST_TRANSPORT", 00:26:31.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:31.582 "adrfam": "ipv4", 00:26:31.582 "trsvcid": "$NVMF_PORT", 00:26:31.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:31.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:31.582 "hdgst": ${hdgst:-false}, 00:26:31.582 "ddgst": ${ddgst:-false} 00:26:31.582 }, 00:26:31.582 "method": "bdev_nvme_attach_controller" 00:26:31.582 } 00:26:31.582 EOF 00:26:31.582 )") 00:26:31.582 18:03:35 -- nvmf/common.sh@542 -- # cat 00:26:31.582 18:03:35 -- 
nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:31.582 18:03:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:31.582 { 00:26:31.582 "params": { 00:26:31.582 "name": "Nvme$subsystem", 00:26:31.582 "trtype": "$TEST_TRANSPORT", 00:26:31.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:31.582 "adrfam": "ipv4", 00:26:31.582 "trsvcid": "$NVMF_PORT", 00:26:31.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:31.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:31.582 "hdgst": ${hdgst:-false}, 00:26:31.582 "ddgst": ${ddgst:-false} 00:26:31.582 }, 00:26:31.582 "method": "bdev_nvme_attach_controller" 00:26:31.582 } 00:26:31.582 EOF 00:26:31.582 )") 00:26:31.582 18:03:35 -- nvmf/common.sh@542 -- # cat 00:26:31.582 18:03:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:31.582 18:03:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:31.582 { 00:26:31.582 "params": { 00:26:31.582 "name": "Nvme$subsystem", 00:26:31.582 "trtype": "$TEST_TRANSPORT", 00:26:31.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:31.582 "adrfam": "ipv4", 00:26:31.582 "trsvcid": "$NVMF_PORT", 00:26:31.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:31.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:31.582 "hdgst": ${hdgst:-false}, 00:26:31.582 "ddgst": ${ddgst:-false} 00:26:31.582 }, 00:26:31.582 "method": "bdev_nvme_attach_controller" 00:26:31.582 } 00:26:31.582 EOF 00:26:31.582 )") 00:26:31.582 [2024-07-22 18:03:35.838395] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:31.582 [2024-07-22 18:03:35.838449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1789629 ] 00:26:31.582 18:03:35 -- nvmf/common.sh@542 -- # cat 00:26:31.582 18:03:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:31.582 18:03:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:31.582 { 00:26:31.582 "params": { 00:26:31.582 "name": "Nvme$subsystem", 00:26:31.582 "trtype": "$TEST_TRANSPORT", 00:26:31.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:31.582 "adrfam": "ipv4", 00:26:31.582 "trsvcid": "$NVMF_PORT", 00:26:31.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:31.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:31.582 "hdgst": ${hdgst:-false}, 00:26:31.582 "ddgst": ${ddgst:-false} 00:26:31.582 }, 00:26:31.582 "method": "bdev_nvme_attach_controller" 00:26:31.582 } 00:26:31.582 EOF 00:26:31.582 )") 00:26:31.582 18:03:35 -- nvmf/common.sh@542 -- # cat 00:26:31.582 18:03:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:31.582 18:03:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:31.582 { 00:26:31.582 "params": { 00:26:31.582 "name": "Nvme$subsystem", 00:26:31.582 "trtype": "$TEST_TRANSPORT", 00:26:31.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:31.582 "adrfam": "ipv4", 00:26:31.582 "trsvcid": "$NVMF_PORT", 00:26:31.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:31.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:31.582 "hdgst": ${hdgst:-false}, 00:26:31.582 "ddgst": ${ddgst:-false} 00:26:31.582 }, 00:26:31.582 "method": "bdev_nvme_attach_controller" 00:26:31.582 } 00:26:31.582 EOF 00:26:31.582 )") 00:26:31.582 18:03:35 -- nvmf/common.sh@542 -- # cat 00:26:31.843 18:03:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:31.843 18:03:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 
00:26:31.843 { 00:26:31.843 "params": { 00:26:31.843 "name": "Nvme$subsystem", 00:26:31.843 "trtype": "$TEST_TRANSPORT", 00:26:31.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:31.843 "adrfam": "ipv4", 00:26:31.843 "trsvcid": "$NVMF_PORT", 00:26:31.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:31.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:31.843 "hdgst": ${hdgst:-false}, 00:26:31.843 "ddgst": ${ddgst:-false} 00:26:31.843 }, 00:26:31.843 "method": "bdev_nvme_attach_controller" 00:26:31.843 } 00:26:31.843 EOF 00:26:31.843 )") 00:26:31.843 18:03:35 -- nvmf/common.sh@542 -- # cat 00:26:31.843 EAL: No free 2048 kB hugepages reported on node 1 00:26:31.843 18:03:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:31.843 18:03:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:31.843 { 00:26:31.843 "params": { 00:26:31.843 "name": "Nvme$subsystem", 00:26:31.843 "trtype": "$TEST_TRANSPORT", 00:26:31.843 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:31.843 "adrfam": "ipv4", 00:26:31.843 "trsvcid": "$NVMF_PORT", 00:26:31.843 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:31.843 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:31.843 "hdgst": ${hdgst:-false}, 00:26:31.843 "ddgst": ${ddgst:-false} 00:26:31.843 }, 00:26:31.843 "method": "bdev_nvme_attach_controller" 00:26:31.843 } 00:26:31.843 EOF 00:26:31.843 )") 00:26:31.843 18:03:35 -- nvmf/common.sh@542 -- # cat 00:26:31.843 18:03:35 -- nvmf/common.sh@544 -- # jq . 00:26:31.843 18:03:35 -- nvmf/common.sh@545 -- # IFS=, 00:26:31.843 18:03:35 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:31.843 "params": { 00:26:31.843 "name": "Nvme1", 00:26:31.843 "trtype": "tcp", 00:26:31.843 "traddr": "10.0.0.2", 00:26:31.843 "adrfam": "ipv4", 00:26:31.843 "trsvcid": "4420", 00:26:31.843 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:31.843 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:31.843 "hdgst": false, 00:26:31.843 "ddgst": false 00:26:31.843 }, 00:26:31.843 "method": "bdev_nvme_attach_controller" 00:26:31.843 },{ 00:26:31.843 "params": { 00:26:31.843 "name": "Nvme2", 00:26:31.843 "trtype": "tcp", 00:26:31.843 "traddr": "10.0.0.2", 00:26:31.843 "adrfam": "ipv4", 00:26:31.843 "trsvcid": "4420", 00:26:31.843 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:31.843 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:31.843 "hdgst": false, 00:26:31.843 "ddgst": false 00:26:31.843 }, 00:26:31.843 "method": "bdev_nvme_attach_controller" 00:26:31.843 },{ 00:26:31.843 "params": { 00:26:31.843 "name": "Nvme3", 00:26:31.843 "trtype": "tcp", 00:26:31.843 "traddr": "10.0.0.2", 00:26:31.843 "adrfam": "ipv4", 00:26:31.843 "trsvcid": "4420", 00:26:31.843 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:31.843 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:31.843 "hdgst": false, 00:26:31.843 "ddgst": false 00:26:31.843 }, 00:26:31.843 "method": "bdev_nvme_attach_controller" 00:26:31.843 },{ 00:26:31.843 "params": { 00:26:31.843 "name": "Nvme4", 00:26:31.843 "trtype": "tcp", 00:26:31.843 "traddr": "10.0.0.2", 00:26:31.843 "adrfam": "ipv4", 00:26:31.843 "trsvcid": "4420", 00:26:31.843 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:31.843 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:31.843 "hdgst": false, 00:26:31.843 "ddgst": false 00:26:31.843 }, 00:26:31.843 "method": "bdev_nvme_attach_controller" 00:26:31.843 },{ 00:26:31.843 "params": { 00:26:31.843 "name": "Nvme5", 00:26:31.843 "trtype": "tcp", 00:26:31.843 "traddr": "10.0.0.2", 00:26:31.843 "adrfam": "ipv4", 00:26:31.843 "trsvcid": "4420", 00:26:31.843 "subnqn": 
"nqn.2016-06.io.spdk:cnode5", 00:26:31.843 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:31.843 "hdgst": false, 00:26:31.843 "ddgst": false 00:26:31.843 }, 00:26:31.843 "method": "bdev_nvme_attach_controller" 00:26:31.843 },{ 00:26:31.843 "params": { 00:26:31.843 "name": "Nvme6", 00:26:31.843 "trtype": "tcp", 00:26:31.843 "traddr": "10.0.0.2", 00:26:31.843 "adrfam": "ipv4", 00:26:31.843 "trsvcid": "4420", 00:26:31.843 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:31.843 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:31.843 "hdgst": false, 00:26:31.843 "ddgst": false 00:26:31.843 }, 00:26:31.843 "method": "bdev_nvme_attach_controller" 00:26:31.843 },{ 00:26:31.843 "params": { 00:26:31.843 "name": "Nvme7", 00:26:31.843 "trtype": "tcp", 00:26:31.843 "traddr": "10.0.0.2", 00:26:31.843 "adrfam": "ipv4", 00:26:31.843 "trsvcid": "4420", 00:26:31.843 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:31.843 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:31.843 "hdgst": false, 00:26:31.843 "ddgst": false 00:26:31.843 }, 00:26:31.843 "method": "bdev_nvme_attach_controller" 00:26:31.843 },{ 00:26:31.843 "params": { 00:26:31.843 "name": "Nvme8", 00:26:31.843 "trtype": "tcp", 00:26:31.843 "traddr": "10.0.0.2", 00:26:31.843 "adrfam": "ipv4", 00:26:31.843 "trsvcid": "4420", 00:26:31.843 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:31.843 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:31.843 "hdgst": false, 00:26:31.843 "ddgst": false 00:26:31.843 }, 00:26:31.843 "method": "bdev_nvme_attach_controller" 00:26:31.843 },{ 00:26:31.843 "params": { 00:26:31.843 "name": "Nvme9", 00:26:31.843 "trtype": "tcp", 00:26:31.843 "traddr": "10.0.0.2", 00:26:31.843 "adrfam": "ipv4", 00:26:31.843 "trsvcid": "4420", 00:26:31.844 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:31.844 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:31.844 "hdgst": false, 00:26:31.844 "ddgst": false 00:26:31.844 }, 00:26:31.844 "method": "bdev_nvme_attach_controller" 00:26:31.844 },{ 00:26:31.844 "params": { 00:26:31.844 "name": "Nvme10", 00:26:31.844 "trtype": "tcp", 00:26:31.844 "traddr": "10.0.0.2", 00:26:31.844 "adrfam": "ipv4", 00:26:31.844 "trsvcid": "4420", 00:26:31.844 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:31.844 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:31.844 "hdgst": false, 00:26:31.844 "ddgst": false 00:26:31.844 }, 00:26:31.844 "method": "bdev_nvme_attach_controller" 00:26:31.844 }' 00:26:31.844 [2024-07-22 18:03:35.920770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.844 [2024-07-22 18:03:35.980404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.226 Running I/O for 1 seconds... 
00:26:34.167 00:26:34.167 Latency(us) 00:26:34.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:34.167 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:34.167 Verification LBA range: start 0x0 length 0x400 00:26:34.167 Nvme1n1 : 1.08 450.26 28.14 0.00 0.00 138365.47 9225.45 106470.79 00:26:34.167 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:34.167 Verification LBA range: start 0x0 length 0x400 00:26:34.167 Nvme2n1 : 1.08 446.08 27.88 0.00 0.00 139378.24 29844.09 126635.72 00:26:34.167 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:34.167 Verification LBA range: start 0x0 length 0x400 00:26:34.167 Nvme3n1 : 1.05 415.03 25.94 0.00 0.00 149035.43 14014.62 145994.04 00:26:34.167 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:34.167 Verification LBA range: start 0x0 length 0x400 00:26:34.167 Nvme4n1 : 1.07 453.50 28.34 0.00 0.00 134157.65 6251.13 114536.76 00:26:34.167 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:34.167 Verification LBA range: start 0x0 length 0x400 00:26:34.167 Nvme5n1 : 1.09 477.78 29.86 0.00 0.00 128703.73 11897.30 111310.38 00:26:34.167 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:34.167 Verification LBA range: start 0x0 length 0x400 00:26:34.167 Nvme6n1 : 1.12 430.57 26.91 0.00 0.00 136422.16 13308.85 125829.12 00:26:34.167 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:34.167 Verification LBA range: start 0x0 length 0x400 00:26:34.167 Nvme7n1 : 1.09 452.61 28.29 0.00 0.00 134015.59 5343.70 129055.51 00:26:34.167 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:34.167 Verification LBA range: start 0x0 length 0x400 00:26:34.167 Nvme8n1 : 1.14 458.29 28.64 0.00 0.00 127168.96 8469.27 111310.38 00:26:34.167 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:34.167 Verification LBA range: start 0x0 length 0x400 00:26:34.167 Nvme9n1 : 1.10 476.11 29.76 0.00 0.00 126214.82 5520.15 111310.38 00:26:34.167 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:34.167 Verification LBA range: start 0x0 length 0x400 00:26:34.167 Nvme10n1 : 1.10 474.68 29.67 0.00 0.00 125576.91 7561.85 114536.76 00:26:34.167 =================================================================================================================== 00:26:34.167 Total : 4534.91 283.43 0.00 0.00 133542.84 5343.70 145994.04 00:26:34.427 18:03:38 -- target/shutdown.sh@93 -- # stoptarget 00:26:34.427 18:03:38 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:34.427 18:03:38 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:34.427 18:03:38 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:34.427 18:03:38 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:34.427 18:03:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:34.427 18:03:38 -- nvmf/common.sh@116 -- # sync 00:26:34.427 18:03:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:34.427 18:03:38 -- nvmf/common.sh@119 -- # set +e 00:26:34.427 18:03:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:34.427 18:03:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:34.427 rmmod nvme_tcp 00:26:34.427 rmmod nvme_fabrics 00:26:34.427 rmmod nvme_keyring 
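Annotation: as a quick consistency check on the bdevperf summary above, 4534.91 IOPS at the 64 KiB I/O size works out to 4534.91 x 64 KiB ≈ 283.4 MiB/s, matching the Total MiB/s column; spread across the ten controllers that is roughly 28 MiB/s each, in line with the 25.94-29.86 MiB/s reported per device for the one-second verify run.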
00:26:34.427 18:03:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:34.427 18:03:38 -- nvmf/common.sh@123 -- # set -e 00:26:34.427 18:03:38 -- nvmf/common.sh@124 -- # return 0 00:26:34.427 18:03:38 -- nvmf/common.sh@477 -- # '[' -n 1788950 ']' 00:26:34.427 18:03:38 -- nvmf/common.sh@478 -- # killprocess 1788950 00:26:34.427 18:03:38 -- common/autotest_common.sh@926 -- # '[' -z 1788950 ']' 00:26:34.427 18:03:38 -- common/autotest_common.sh@930 -- # kill -0 1788950 00:26:34.428 18:03:38 -- common/autotest_common.sh@931 -- # uname 00:26:34.428 18:03:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:34.428 18:03:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1788950 00:26:34.428 18:03:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:34.428 18:03:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:34.428 18:03:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1788950' 00:26:34.428 killing process with pid 1788950 00:26:34.428 18:03:38 -- common/autotest_common.sh@945 -- # kill 1788950 00:26:34.428 18:03:38 -- common/autotest_common.sh@950 -- # wait 1788950 00:26:34.688 18:03:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:34.688 18:03:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:34.688 18:03:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:34.688 18:03:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:34.688 18:03:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:34.688 18:03:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.688 18:03:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:34.688 18:03:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.233 18:03:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:37.233 00:26:37.233 real 0m16.765s 00:26:37.233 user 0m31.501s 00:26:37.233 sys 0m7.353s 00:26:37.233 18:03:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:37.233 18:03:40 -- common/autotest_common.sh@10 -- # set +x 00:26:37.233 ************************************ 00:26:37.233 END TEST nvmf_shutdown_tc1 00:26:37.233 ************************************ 00:26:37.233 18:03:40 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:26:37.233 18:03:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:37.233 18:03:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:37.233 18:03:40 -- common/autotest_common.sh@10 -- # set +x 00:26:37.233 ************************************ 00:26:37.233 START TEST nvmf_shutdown_tc2 00:26:37.233 ************************************ 00:26:37.233 18:03:40 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:26:37.233 18:03:40 -- target/shutdown.sh@98 -- # starttarget 00:26:37.233 18:03:40 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:37.233 18:03:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:37.233 18:03:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:37.233 18:03:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:37.233 18:03:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:37.233 18:03:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:37.233 18:03:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.233 18:03:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:37.233 18:03:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.233 18:03:40 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:37.233 18:03:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:37.233 18:03:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:37.233 18:03:40 -- common/autotest_common.sh@10 -- # set +x 00:26:37.233 18:03:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:37.233 18:03:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:37.233 18:03:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:37.233 18:03:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:37.233 18:03:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:37.233 18:03:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:37.233 18:03:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:37.233 18:03:40 -- nvmf/common.sh@294 -- # net_devs=() 00:26:37.233 18:03:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:37.233 18:03:41 -- nvmf/common.sh@295 -- # e810=() 00:26:37.233 18:03:41 -- nvmf/common.sh@295 -- # local -ga e810 00:26:37.233 18:03:41 -- nvmf/common.sh@296 -- # x722=() 00:26:37.233 18:03:41 -- nvmf/common.sh@296 -- # local -ga x722 00:26:37.233 18:03:41 -- nvmf/common.sh@297 -- # mlx=() 00:26:37.233 18:03:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:37.233 18:03:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:37.233 18:03:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:37.233 18:03:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:37.233 18:03:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:37.233 18:03:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:37.233 18:03:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:37.233 18:03:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:37.233 18:03:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:37.233 18:03:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:37.233 18:03:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:37.233 18:03:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:37.233 18:03:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:37.233 18:03:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:37.233 18:03:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:37.233 18:03:41 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:37.233 18:03:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:37.233 18:03:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:37.233 18:03:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:37.233 18:03:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:37.233 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:37.233 18:03:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:37.233 18:03:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:37.233 18:03:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:37.233 18:03:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:37.233 18:03:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:37.233 18:03:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:37.233 18:03:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:37.233 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:37.233 18:03:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:37.233 18:03:41 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:37.233 18:03:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:37.233 18:03:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:37.233 18:03:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:37.233 18:03:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:37.233 18:03:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:37.233 18:03:41 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:37.233 18:03:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:37.233 18:03:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:37.233 18:03:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:37.233 18:03:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:37.233 18:03:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:37.233 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:37.233 18:03:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:37.233 18:03:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:37.233 18:03:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:37.233 18:03:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:37.234 18:03:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:37.234 18:03:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:37.234 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:37.234 18:03:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:37.234 18:03:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:37.234 18:03:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:37.234 18:03:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:37.234 18:03:41 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:37.234 18:03:41 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:37.234 18:03:41 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:37.234 18:03:41 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:37.234 18:03:41 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:37.234 18:03:41 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:37.234 18:03:41 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:37.234 18:03:41 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:37.234 18:03:41 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:37.234 18:03:41 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:37.234 18:03:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:37.234 18:03:41 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:37.234 18:03:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:37.234 18:03:41 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:37.234 18:03:41 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:37.234 18:03:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:37.234 18:03:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:37.234 18:03:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:37.234 18:03:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:37.234 18:03:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:37.234 18:03:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
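The nvmf_tcp_init steps traced above put one of the two E810 ports into a private network namespace so that a single host can act as both NVMe/TCP target and initiator over real NICs. Condensed into plain shell, using the cvl_0_0/cvl_0_1 names from this run, the plumbing amounts to roughly the following (a sketch of what nvmf/common.sh does here, not the helper itself):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                  # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move port 0 into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP (port 4420) in

The two pings that follow in the log simply confirm that 10.0.0.2 is reachable from the host namespace and 10.0.0.1 from inside cvl_0_0_ns_spdk before the target is started.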
00:26:37.234 18:03:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:37.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:37.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:26:37.234 00:26:37.234 --- 10.0.0.2 ping statistics --- 00:26:37.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.234 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:26:37.234 18:03:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:37.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:37.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:26:37.234 00:26:37.234 --- 10.0.0.1 ping statistics --- 00:26:37.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.234 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:26:37.234 18:03:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:37.234 18:03:41 -- nvmf/common.sh@410 -- # return 0 00:26:37.234 18:03:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:37.234 18:03:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:37.234 18:03:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:37.234 18:03:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:37.234 18:03:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:37.234 18:03:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:37.234 18:03:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:37.234 18:03:41 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:37.234 18:03:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:37.234 18:03:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:37.234 18:03:41 -- common/autotest_common.sh@10 -- # set +x 00:26:37.234 18:03:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:37.234 18:03:41 -- nvmf/common.sh@469 -- # nvmfpid=1790643 00:26:37.234 18:03:41 -- nvmf/common.sh@470 -- # waitforlisten 1790643 00:26:37.234 18:03:41 -- common/autotest_common.sh@819 -- # '[' -z 1790643 ']' 00:26:37.234 18:03:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:37.234 18:03:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:37.234 18:03:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:37.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:37.234 18:03:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:37.234 18:03:41 -- common/autotest_common.sh@10 -- # set +x 00:26:37.234 [2024-07-22 18:03:41.388050] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:26:37.234 [2024-07-22 18:03:41.388101] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:37.234 EAL: No free 2048 kB hugepages reported on node 1 00:26:37.234 [2024-07-22 18:03:41.453362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:37.495 [2024-07-22 18:03:41.515115] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:37.495 [2024-07-22 18:03:41.515241] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:37.495 [2024-07-22 18:03:41.515250] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:37.495 [2024-07-22 18:03:41.515257] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:37.495 [2024-07-22 18:03:41.515385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:37.495 [2024-07-22 18:03:41.515504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:37.495 [2024-07-22 18:03:41.515638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:37.495 [2024-07-22 18:03:41.515639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:38.065 18:03:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:38.065 18:03:42 -- common/autotest_common.sh@852 -- # return 0 00:26:38.065 18:03:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:38.065 18:03:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:38.065 18:03:42 -- common/autotest_common.sh@10 -- # set +x 00:26:38.065 18:03:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:38.065 18:03:42 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:38.065 18:03:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:38.065 18:03:42 -- common/autotest_common.sh@10 -- # set +x 00:26:38.065 [2024-07-22 18:03:42.297880] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:38.065 18:03:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:38.065 18:03:42 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:38.065 18:03:42 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:38.065 18:03:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:38.065 18:03:42 -- common/autotest_common.sh@10 -- # set +x 00:26:38.066 18:03:42 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:38.066 18:03:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:38.066 18:03:42 -- target/shutdown.sh@28 -- # cat 00:26:38.066 18:03:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:38.066 18:03:42 -- target/shutdown.sh@28 -- # cat 00:26:38.066 18:03:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:38.066 18:03:42 -- target/shutdown.sh@28 -- # cat 00:26:38.066 18:03:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:38.066 18:03:42 -- target/shutdown.sh@28 -- # cat 00:26:38.066 18:03:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:38.066 18:03:42 -- target/shutdown.sh@28 -- # cat 00:26:38.066 18:03:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:38.066 18:03:42 -- 
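nvmfappstart runs nvmf_tgt inside the target namespace with core mask 0x1E; 0x1E is binary 11110, i.e. cores 1-4, which is why the "Total cores available: 4" notice and exactly four "Reactor started on core N" messages appear above. A quick way to read such a mask (assumes bc is installed; any hex-to-binary conversion does the job):

    echo "obase=2; $((0x1E))" | bc    # prints 11110 -> reactors on cores 1,2,3,4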
target/shutdown.sh@28 -- # cat 00:26:38.326 18:03:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:38.326 18:03:42 -- target/shutdown.sh@28 -- # cat 00:26:38.326 18:03:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:38.326 18:03:42 -- target/shutdown.sh@28 -- # cat 00:26:38.326 18:03:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:38.326 18:03:42 -- target/shutdown.sh@28 -- # cat 00:26:38.326 18:03:42 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:38.326 18:03:42 -- target/shutdown.sh@28 -- # cat 00:26:38.326 18:03:42 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:38.326 18:03:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:38.326 18:03:42 -- common/autotest_common.sh@10 -- # set +x 00:26:38.326 Malloc1 00:26:38.326 [2024-07-22 18:03:42.400870] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:38.326 Malloc2 00:26:38.326 Malloc3 00:26:38.326 Malloc4 00:26:38.326 Malloc5 00:26:38.326 Malloc6 00:26:38.587 Malloc7 00:26:38.587 Malloc8 00:26:38.587 Malloc9 00:26:38.587 Malloc10 00:26:38.587 18:03:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:38.587 18:03:42 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:38.587 18:03:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:38.587 18:03:42 -- common/autotest_common.sh@10 -- # set +x 00:26:38.587 18:03:42 -- target/shutdown.sh@102 -- # perfpid=1790994 00:26:38.587 18:03:42 -- target/shutdown.sh@103 -- # waitforlisten 1790994 /var/tmp/bdevperf.sock 00:26:38.587 18:03:42 -- common/autotest_common.sh@819 -- # '[' -z 1790994 ']' 00:26:38.587 18:03:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:38.587 18:03:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:38.587 18:03:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:38.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
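The create_subsystems loop above appends one block per subsystem to rpcs.txt; the batch itself is never echoed into the log, but the Malloc1..Malloc10 bdevs and the listener on 10.0.0.2 port 4420 that it produces show up right afterwards. A representative per-subsystem block, written as standalone rpc.py calls rather than the script's literal heredoc (a sketch; sizes and serial numbers are illustrative):

    i=1
    scripts/rpc.py bdev_malloc_create -b Malloc$i 64 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420

This rides on the nvmf_create_transport -t tcp -o -u 8192 call already traced at shutdown.sh@20.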
00:26:38.587 18:03:42 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:38.587 18:03:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:38.587 18:03:42 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:38.587 18:03:42 -- common/autotest_common.sh@10 -- # set +x 00:26:38.587 18:03:42 -- nvmf/common.sh@520 -- # config=() 00:26:38.587 18:03:42 -- nvmf/common.sh@520 -- # local subsystem config 00:26:38.587 18:03:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:38.587 18:03:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:38.587 { 00:26:38.587 "params": { 00:26:38.587 "name": "Nvme$subsystem", 00:26:38.587 "trtype": "$TEST_TRANSPORT", 00:26:38.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.587 "adrfam": "ipv4", 00:26:38.587 "trsvcid": "$NVMF_PORT", 00:26:38.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.587 "hdgst": ${hdgst:-false}, 00:26:38.587 "ddgst": ${ddgst:-false} 00:26:38.587 }, 00:26:38.587 "method": "bdev_nvme_attach_controller" 00:26:38.587 } 00:26:38.587 EOF 00:26:38.587 )") 00:26:38.587 18:03:42 -- nvmf/common.sh@542 -- # cat 00:26:38.587 18:03:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:38.587 18:03:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:38.587 { 00:26:38.587 "params": { 00:26:38.587 "name": "Nvme$subsystem", 00:26:38.587 "trtype": "$TEST_TRANSPORT", 00:26:38.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.587 "adrfam": "ipv4", 00:26:38.587 "trsvcid": "$NVMF_PORT", 00:26:38.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.587 "hdgst": ${hdgst:-false}, 00:26:38.587 "ddgst": ${ddgst:-false} 00:26:38.587 }, 00:26:38.587 "method": "bdev_nvme_attach_controller" 00:26:38.587 } 00:26:38.587 EOF 00:26:38.587 )") 00:26:38.587 18:03:42 -- nvmf/common.sh@542 -- # cat 00:26:38.587 18:03:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:38.587 18:03:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:38.587 { 00:26:38.587 "params": { 00:26:38.587 "name": "Nvme$subsystem", 00:26:38.587 "trtype": "$TEST_TRANSPORT", 00:26:38.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.587 "adrfam": "ipv4", 00:26:38.587 "trsvcid": "$NVMF_PORT", 00:26:38.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.587 "hdgst": ${hdgst:-false}, 00:26:38.587 "ddgst": ${ddgst:-false} 00:26:38.587 }, 00:26:38.587 "method": "bdev_nvme_attach_controller" 00:26:38.587 } 00:26:38.587 EOF 00:26:38.587 )") 00:26:38.587 18:03:42 -- nvmf/common.sh@542 -- # cat 00:26:38.587 18:03:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:38.587 18:03:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:38.587 { 00:26:38.587 "params": { 00:26:38.587 "name": "Nvme$subsystem", 00:26:38.587 "trtype": "$TEST_TRANSPORT", 00:26:38.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.587 "adrfam": "ipv4", 00:26:38.587 "trsvcid": "$NVMF_PORT", 00:26:38.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.587 "hdgst": ${hdgst:-false}, 00:26:38.587 "ddgst": ${ddgst:-false} 00:26:38.587 }, 00:26:38.587 "method": "bdev_nvme_attach_controller" 00:26:38.587 } 00:26:38.587 EOF 00:26:38.587 )") 
00:26:38.587 18:03:42 -- nvmf/common.sh@542 -- # cat 00:26:38.587 18:03:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:38.587 18:03:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:38.587 { 00:26:38.587 "params": { 00:26:38.587 "name": "Nvme$subsystem", 00:26:38.587 "trtype": "$TEST_TRANSPORT", 00:26:38.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.587 "adrfam": "ipv4", 00:26:38.587 "trsvcid": "$NVMF_PORT", 00:26:38.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.587 "hdgst": ${hdgst:-false}, 00:26:38.587 "ddgst": ${ddgst:-false} 00:26:38.587 }, 00:26:38.587 "method": "bdev_nvme_attach_controller" 00:26:38.587 } 00:26:38.587 EOF 00:26:38.587 )") 00:26:38.587 18:03:42 -- nvmf/common.sh@542 -- # cat 00:26:38.587 18:03:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:38.587 18:03:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:38.587 { 00:26:38.587 "params": { 00:26:38.587 "name": "Nvme$subsystem", 00:26:38.588 "trtype": "$TEST_TRANSPORT", 00:26:38.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.588 "adrfam": "ipv4", 00:26:38.588 "trsvcid": "$NVMF_PORT", 00:26:38.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.588 "hdgst": ${hdgst:-false}, 00:26:38.588 "ddgst": ${ddgst:-false} 00:26:38.588 }, 00:26:38.588 "method": "bdev_nvme_attach_controller" 00:26:38.588 } 00:26:38.588 EOF 00:26:38.588 )") 00:26:38.588 [2024-07-22 18:03:42.844155] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:38.588 [2024-07-22 18:03:42.844206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1790994 ] 00:26:38.588 18:03:42 -- nvmf/common.sh@542 -- # cat 00:26:38.588 18:03:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:38.588 18:03:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:38.588 { 00:26:38.588 "params": { 00:26:38.588 "name": "Nvme$subsystem", 00:26:38.588 "trtype": "$TEST_TRANSPORT", 00:26:38.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.588 "adrfam": "ipv4", 00:26:38.588 "trsvcid": "$NVMF_PORT", 00:26:38.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.588 "hdgst": ${hdgst:-false}, 00:26:38.588 "ddgst": ${ddgst:-false} 00:26:38.588 }, 00:26:38.588 "method": "bdev_nvme_attach_controller" 00:26:38.588 } 00:26:38.588 EOF 00:26:38.588 )") 00:26:38.588 18:03:42 -- nvmf/common.sh@542 -- # cat 00:26:38.588 18:03:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:38.588 18:03:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:38.588 { 00:26:38.588 "params": { 00:26:38.588 "name": "Nvme$subsystem", 00:26:38.588 "trtype": "$TEST_TRANSPORT", 00:26:38.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.588 "adrfam": "ipv4", 00:26:38.588 "trsvcid": "$NVMF_PORT", 00:26:38.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.588 "hdgst": ${hdgst:-false}, 00:26:38.588 "ddgst": ${ddgst:-false} 00:26:38.588 }, 00:26:38.588 "method": "bdev_nvme_attach_controller" 00:26:38.588 } 00:26:38.588 EOF 00:26:38.588 )") 00:26:38.588 18:03:42 -- nvmf/common.sh@542 -- # cat 00:26:38.850 18:03:42 -- nvmf/common.sh@522 -- # for subsystem in 
"${@:-1}" 00:26:38.850 18:03:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:38.850 { 00:26:38.850 "params": { 00:26:38.850 "name": "Nvme$subsystem", 00:26:38.850 "trtype": "$TEST_TRANSPORT", 00:26:38.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.850 "adrfam": "ipv4", 00:26:38.850 "trsvcid": "$NVMF_PORT", 00:26:38.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.850 "hdgst": ${hdgst:-false}, 00:26:38.850 "ddgst": ${ddgst:-false} 00:26:38.850 }, 00:26:38.850 "method": "bdev_nvme_attach_controller" 00:26:38.850 } 00:26:38.850 EOF 00:26:38.850 )") 00:26:38.850 18:03:42 -- nvmf/common.sh@542 -- # cat 00:26:38.850 EAL: No free 2048 kB hugepages reported on node 1 00:26:38.850 18:03:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:38.850 18:03:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:38.850 { 00:26:38.850 "params": { 00:26:38.850 "name": "Nvme$subsystem", 00:26:38.850 "trtype": "$TEST_TRANSPORT", 00:26:38.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.850 "adrfam": "ipv4", 00:26:38.850 "trsvcid": "$NVMF_PORT", 00:26:38.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.850 "hdgst": ${hdgst:-false}, 00:26:38.850 "ddgst": ${ddgst:-false} 00:26:38.850 }, 00:26:38.850 "method": "bdev_nvme_attach_controller" 00:26:38.850 } 00:26:38.850 EOF 00:26:38.850 )") 00:26:38.850 18:03:42 -- nvmf/common.sh@542 -- # cat 00:26:38.850 18:03:42 -- nvmf/common.sh@544 -- # jq . 00:26:38.850 18:03:42 -- nvmf/common.sh@545 -- # IFS=, 00:26:38.850 18:03:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:38.850 "params": { 00:26:38.850 "name": "Nvme1", 00:26:38.850 "trtype": "tcp", 00:26:38.850 "traddr": "10.0.0.2", 00:26:38.850 "adrfam": "ipv4", 00:26:38.850 "trsvcid": "4420", 00:26:38.850 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:38.850 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:38.850 "hdgst": false, 00:26:38.850 "ddgst": false 00:26:38.850 }, 00:26:38.850 "method": "bdev_nvme_attach_controller" 00:26:38.850 },{ 00:26:38.850 "params": { 00:26:38.850 "name": "Nvme2", 00:26:38.850 "trtype": "tcp", 00:26:38.850 "traddr": "10.0.0.2", 00:26:38.850 "adrfam": "ipv4", 00:26:38.850 "trsvcid": "4420", 00:26:38.850 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:38.850 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:38.850 "hdgst": false, 00:26:38.850 "ddgst": false 00:26:38.850 }, 00:26:38.850 "method": "bdev_nvme_attach_controller" 00:26:38.850 },{ 00:26:38.850 "params": { 00:26:38.850 "name": "Nvme3", 00:26:38.850 "trtype": "tcp", 00:26:38.850 "traddr": "10.0.0.2", 00:26:38.850 "adrfam": "ipv4", 00:26:38.850 "trsvcid": "4420", 00:26:38.850 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:38.850 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:38.850 "hdgst": false, 00:26:38.850 "ddgst": false 00:26:38.850 }, 00:26:38.850 "method": "bdev_nvme_attach_controller" 00:26:38.850 },{ 00:26:38.850 "params": { 00:26:38.850 "name": "Nvme4", 00:26:38.850 "trtype": "tcp", 00:26:38.850 "traddr": "10.0.0.2", 00:26:38.850 "adrfam": "ipv4", 00:26:38.850 "trsvcid": "4420", 00:26:38.850 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:38.850 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:38.850 "hdgst": false, 00:26:38.850 "ddgst": false 00:26:38.850 }, 00:26:38.850 "method": "bdev_nvme_attach_controller" 00:26:38.850 },{ 00:26:38.850 "params": { 00:26:38.850 "name": "Nvme5", 00:26:38.850 "trtype": "tcp", 00:26:38.850 "traddr": "10.0.0.2", 00:26:38.850 
"adrfam": "ipv4", 00:26:38.850 "trsvcid": "4420", 00:26:38.850 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:38.850 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:38.850 "hdgst": false, 00:26:38.850 "ddgst": false 00:26:38.850 }, 00:26:38.850 "method": "bdev_nvme_attach_controller" 00:26:38.850 },{ 00:26:38.850 "params": { 00:26:38.850 "name": "Nvme6", 00:26:38.850 "trtype": "tcp", 00:26:38.850 "traddr": "10.0.0.2", 00:26:38.850 "adrfam": "ipv4", 00:26:38.850 "trsvcid": "4420", 00:26:38.850 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:38.850 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:38.850 "hdgst": false, 00:26:38.850 "ddgst": false 00:26:38.850 }, 00:26:38.850 "method": "bdev_nvme_attach_controller" 00:26:38.850 },{ 00:26:38.850 "params": { 00:26:38.850 "name": "Nvme7", 00:26:38.850 "trtype": "tcp", 00:26:38.850 "traddr": "10.0.0.2", 00:26:38.850 "adrfam": "ipv4", 00:26:38.850 "trsvcid": "4420", 00:26:38.850 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:38.850 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:38.850 "hdgst": false, 00:26:38.850 "ddgst": false 00:26:38.850 }, 00:26:38.850 "method": "bdev_nvme_attach_controller" 00:26:38.850 },{ 00:26:38.850 "params": { 00:26:38.850 "name": "Nvme8", 00:26:38.850 "trtype": "tcp", 00:26:38.850 "traddr": "10.0.0.2", 00:26:38.850 "adrfam": "ipv4", 00:26:38.850 "trsvcid": "4420", 00:26:38.850 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:38.850 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:38.850 "hdgst": false, 00:26:38.850 "ddgst": false 00:26:38.850 }, 00:26:38.850 "method": "bdev_nvme_attach_controller" 00:26:38.850 },{ 00:26:38.850 "params": { 00:26:38.850 "name": "Nvme9", 00:26:38.850 "trtype": "tcp", 00:26:38.850 "traddr": "10.0.0.2", 00:26:38.850 "adrfam": "ipv4", 00:26:38.850 "trsvcid": "4420", 00:26:38.850 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:38.850 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:38.850 "hdgst": false, 00:26:38.850 "ddgst": false 00:26:38.850 }, 00:26:38.850 "method": "bdev_nvme_attach_controller" 00:26:38.850 },{ 00:26:38.850 "params": { 00:26:38.850 "name": "Nvme10", 00:26:38.850 "trtype": "tcp", 00:26:38.850 "traddr": "10.0.0.2", 00:26:38.850 "adrfam": "ipv4", 00:26:38.850 "trsvcid": "4420", 00:26:38.850 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:38.850 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:38.850 "hdgst": false, 00:26:38.850 "ddgst": false 00:26:38.850 }, 00:26:38.850 "method": "bdev_nvme_attach_controller" 00:26:38.850 }' 00:26:38.850 [2024-07-22 18:03:42.925708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.850 [2024-07-22 18:03:42.985942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.236 Running I/O for 10 seconds... 
00:26:40.807 18:03:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:40.807 18:03:45 -- common/autotest_common.sh@852 -- # return 0 00:26:40.807 18:03:45 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:40.807 18:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:40.807 18:03:45 -- common/autotest_common.sh@10 -- # set +x 00:26:40.807 18:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:40.807 18:03:45 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:40.807 18:03:45 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:40.807 18:03:45 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:26:40.807 18:03:45 -- target/shutdown.sh@57 -- # local ret=1 00:26:40.807 18:03:45 -- target/shutdown.sh@58 -- # local i 00:26:40.807 18:03:45 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:26:40.807 18:03:45 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:41.069 18:03:45 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:41.069 18:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:41.069 18:03:45 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:41.069 18:03:45 -- common/autotest_common.sh@10 -- # set +x 00:26:41.069 18:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:41.069 18:03:45 -- target/shutdown.sh@60 -- # read_io_count=294 00:26:41.069 18:03:45 -- target/shutdown.sh@63 -- # '[' 294 -ge 100 ']' 00:26:41.069 18:03:45 -- target/shutdown.sh@64 -- # ret=0 00:26:41.069 18:03:45 -- target/shutdown.sh@65 -- # break 00:26:41.069 18:03:45 -- target/shutdown.sh@69 -- # return 0 00:26:41.069 18:03:45 -- target/shutdown.sh@109 -- # killprocess 1790994 00:26:41.069 18:03:45 -- common/autotest_common.sh@926 -- # '[' -z 1790994 ']' 00:26:41.069 18:03:45 -- common/autotest_common.sh@930 -- # kill -0 1790994 00:26:41.069 18:03:45 -- common/autotest_common.sh@931 -- # uname 00:26:41.069 18:03:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:41.069 18:03:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1790994 00:26:41.069 18:03:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:41.069 18:03:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:41.069 18:03:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1790994' 00:26:41.069 killing process with pid 1790994 00:26:41.069 18:03:45 -- common/autotest_common.sh@945 -- # kill 1790994 00:26:41.069 18:03:45 -- common/autotest_common.sh@950 -- # wait 1790994 00:26:41.069 Received shutdown signal, test time was about 0.865503 seconds 00:26:41.069 00:26:41.069 Latency(us) 00:26:41.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:41.069 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:41.069 Verification LBA range: start 0x0 length 0x400 00:26:41.069 Nvme1n1 : 0.84 475.98 29.75 0.00 0.00 131976.63 18551.73 129055.51 00:26:41.069 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:41.069 Verification LBA range: start 0x0 length 0x400 00:26:41.069 Nvme2n1 : 0.86 411.48 25.72 0.00 0.00 144044.01 19559.98 112116.97 00:26:41.069 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:41.069 Verification LBA range: start 0x0 length 0x400 00:26:41.069 Nvme3n1 : 0.83 426.92 26.68 0.00 0.00 144213.42 16031.11 141154.46 00:26:41.069 Job: Nvme4n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:26:41.069 Verification LBA range: start 0x0 length 0x400 00:26:41.069 Nvme4n1 : 0.83 483.45 30.22 0.00 0.00 126418.69 17946.78 105664.20 00:26:41.069 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:41.069 Verification LBA range: start 0x0 length 0x400 00:26:41.069 Nvme5n1 : 0.84 473.58 29.60 0.00 0.00 127674.11 20164.92 115343.36 00:26:41.069 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:41.069 Verification LBA range: start 0x0 length 0x400 00:26:41.069 Nvme6n1 : 0.84 472.34 29.52 0.00 0.00 127374.80 16434.41 104051.00 00:26:41.069 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:41.069 Verification LBA range: start 0x0 length 0x400 00:26:41.069 Nvme7n1 : 0.84 415.71 25.98 0.00 0.00 142524.19 11040.30 167772.16 00:26:41.069 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:41.069 Verification LBA range: start 0x0 length 0x400 00:26:41.069 Nvme8n1 : 0.83 479.22 29.95 0.00 0.00 122930.79 18955.03 110503.78 00:26:41.069 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:41.069 Verification LBA range: start 0x0 length 0x400 00:26:41.069 Nvme9n1 : 0.83 477.03 29.81 0.00 0.00 122316.47 19459.15 100018.02 00:26:41.069 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:41.069 Verification LBA range: start 0x0 length 0x400 00:26:41.069 Nvme10n1 : 0.86 410.56 25.66 0.00 0.00 134301.80 12653.49 109697.18 00:26:41.069 =================================================================================================================== 00:26:41.069 Total : 4526.27 282.89 0.00 0.00 131956.24 11040.30 167772.16 00:26:41.329 18:03:45 -- target/shutdown.sh@112 -- # sleep 1 00:26:42.270 18:03:46 -- target/shutdown.sh@113 -- # kill -0 1790643 00:26:42.270 18:03:46 -- target/shutdown.sh@115 -- # stoptarget 00:26:42.270 18:03:46 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:42.270 18:03:46 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:42.270 18:03:46 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:42.270 18:03:46 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:42.270 18:03:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:42.270 18:03:46 -- nvmf/common.sh@116 -- # sync 00:26:42.270 18:03:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:42.270 18:03:46 -- nvmf/common.sh@119 -- # set +e 00:26:42.270 18:03:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:42.270 18:03:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:42.270 rmmod nvme_tcp 00:26:42.270 rmmod nvme_fabrics 00:26:42.270 rmmod nvme_keyring 00:26:42.270 18:03:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:42.270 18:03:46 -- nvmf/common.sh@123 -- # set -e 00:26:42.270 18:03:46 -- nvmf/common.sh@124 -- # return 0 00:26:42.270 18:03:46 -- nvmf/common.sh@477 -- # '[' -n 1790643 ']' 00:26:42.270 18:03:46 -- nvmf/common.sh@478 -- # killprocess 1790643 00:26:42.270 18:03:46 -- common/autotest_common.sh@926 -- # '[' -z 1790643 ']' 00:26:42.270 18:03:46 -- common/autotest_common.sh@930 -- # kill -0 1790643 00:26:42.270 18:03:46 -- common/autotest_common.sh@931 -- # uname 00:26:42.270 18:03:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:42.270 18:03:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o 
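Before anything gets killed, the harness makes sure bdevperf actually has I/O in flight: the waitforio helper (shutdown.sh@50-69 in the trace above, reused by nvmf_shutdown_tc3 below) polls bdevperf's RPC socket until Nvme1n1 has completed at least 100 reads, retrying up to 10 times. Roughly equivalent shell (a sketch; the harness goes through its rpc_cmd wrapper instead of calling rpc.py directly):

    waitforio() {
        local sock=$1 bdev=$2 ret=1 i count
        for ((i = 10; i != 0; i--)); do
            count=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
            [ "$count" -ge 100 ] && { ret=0; break; }
            sleep 0.25
        done
        return $ret
    }
    waitforio /var/tmp/bdevperf.sock Nvme1n1    # returned 0 above once num_read_ops hit 294

Here the first poll already saw 294 reads, so the loop exits immediately and killprocess stops bdevperf mid-run; the kill -0 check that follows verifies the nvmf target itself survived.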
comm= 1790643 00:26:42.270 18:03:46 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:42.270 18:03:46 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:42.270 18:03:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1790643' 00:26:42.270 killing process with pid 1790643 00:26:42.270 18:03:46 -- common/autotest_common.sh@945 -- # kill 1790643 00:26:42.270 18:03:46 -- common/autotest_common.sh@950 -- # wait 1790643 00:26:42.530 18:03:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:42.530 18:03:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:42.530 18:03:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:42.530 18:03:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:42.530 18:03:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:42.530 18:03:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.530 18:03:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:42.530 18:03:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.080 18:03:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:45.080 00:26:45.080 real 0m7.871s 00:26:45.080 user 0m23.975s 00:26:45.080 sys 0m1.290s 00:26:45.080 18:03:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:45.080 18:03:48 -- common/autotest_common.sh@10 -- # set +x 00:26:45.080 ************************************ 00:26:45.080 END TEST nvmf_shutdown_tc2 00:26:45.080 ************************************ 00:26:45.081 18:03:48 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:26:45.081 18:03:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:45.081 18:03:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:45.081 18:03:48 -- common/autotest_common.sh@10 -- # set +x 00:26:45.081 ************************************ 00:26:45.081 START TEST nvmf_shutdown_tc3 00:26:45.081 ************************************ 00:26:45.081 18:03:48 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:26:45.081 18:03:48 -- target/shutdown.sh@120 -- # starttarget 00:26:45.081 18:03:48 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:45.081 18:03:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:45.081 18:03:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:45.081 18:03:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:45.081 18:03:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:45.081 18:03:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:45.081 18:03:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.081 18:03:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:45.081 18:03:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.081 18:03:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:45.081 18:03:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:45.081 18:03:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:45.081 18:03:48 -- common/autotest_common.sh@10 -- # set +x 00:26:45.081 18:03:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:45.081 18:03:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:45.081 18:03:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:45.081 18:03:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:45.081 18:03:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:45.081 18:03:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:45.081 18:03:48 -- nvmf/common.sh@292 
-- # local -A pci_drivers 00:26:45.081 18:03:48 -- nvmf/common.sh@294 -- # net_devs=() 00:26:45.081 18:03:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:45.081 18:03:48 -- nvmf/common.sh@295 -- # e810=() 00:26:45.081 18:03:48 -- nvmf/common.sh@295 -- # local -ga e810 00:26:45.081 18:03:48 -- nvmf/common.sh@296 -- # x722=() 00:26:45.081 18:03:48 -- nvmf/common.sh@296 -- # local -ga x722 00:26:45.081 18:03:48 -- nvmf/common.sh@297 -- # mlx=() 00:26:45.081 18:03:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:45.081 18:03:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:45.081 18:03:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:45.081 18:03:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:45.081 18:03:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:45.081 18:03:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:45.081 18:03:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:45.081 18:03:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:45.081 18:03:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:45.081 18:03:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:45.081 18:03:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:45.081 18:03:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:45.081 18:03:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:45.081 18:03:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:45.081 18:03:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:45.081 18:03:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:45.081 18:03:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:45.081 18:03:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:45.081 18:03:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:45.081 18:03:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:45.081 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:45.081 18:03:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:45.081 18:03:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:45.081 18:03:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.081 18:03:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.081 18:03:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:45.081 18:03:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:45.081 18:03:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:45.081 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:45.081 18:03:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:45.081 18:03:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:45.081 18:03:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:45.081 18:03:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:45.081 18:03:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:45.081 18:03:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:45.081 18:03:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:45.081 18:03:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:45.081 18:03:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:45.081 18:03:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.081 18:03:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
00:26:45.081 18:03:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.081 18:03:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:45.081 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:45.081 18:03:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.081 18:03:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:45.081 18:03:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:45.081 18:03:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:45.081 18:03:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:45.081 18:03:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:45.081 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:45.081 18:03:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:45.081 18:03:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:45.081 18:03:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:45.081 18:03:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:45.081 18:03:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:45.081 18:03:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:45.081 18:03:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:45.081 18:03:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:45.081 18:03:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:45.081 18:03:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:45.081 18:03:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:45.081 18:03:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:45.081 18:03:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:45.081 18:03:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:45.081 18:03:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:45.081 18:03:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:45.081 18:03:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:45.081 18:03:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:45.081 18:03:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:45.081 18:03:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:45.081 18:03:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:45.081 18:03:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:45.081 18:03:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:45.081 18:03:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:45.081 18:03:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:45.081 18:03:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:45.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:45.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:26:45.081 00:26:45.081 --- 10.0.0.2 ping statistics --- 00:26:45.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.081 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:26:45.081 18:03:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:45.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:45.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:26:45.081 00:26:45.081 --- 10.0.0.1 ping statistics --- 00:26:45.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:45.081 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:26:45.081 18:03:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:45.081 18:03:49 -- nvmf/common.sh@410 -- # return 0 00:26:45.081 18:03:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:45.081 18:03:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:45.081 18:03:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:45.081 18:03:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:45.081 18:03:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:45.081 18:03:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:45.081 18:03:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:45.081 18:03:49 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:45.081 18:03:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:45.081 18:03:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:45.081 18:03:49 -- common/autotest_common.sh@10 -- # set +x 00:26:45.081 18:03:49 -- nvmf/common.sh@469 -- # nvmfpid=1792148 00:26:45.081 18:03:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:45.081 18:03:49 -- nvmf/common.sh@470 -- # waitforlisten 1792148 00:26:45.081 18:03:49 -- common/autotest_common.sh@819 -- # '[' -z 1792148 ']' 00:26:45.081 18:03:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:45.081 18:03:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:45.081 18:03:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:45.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:45.081 18:03:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:45.081 18:03:49 -- common/autotest_common.sh@10 -- # set +x 00:26:45.081 [2024-07-22 18:03:49.339413] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:45.081 [2024-07-22 18:03:49.339473] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:45.343 EAL: No free 2048 kB hugepages reported on node 1 00:26:45.343 [2024-07-22 18:03:49.415763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:45.343 [2024-07-22 18:03:49.485005] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:45.343 [2024-07-22 18:03:49.485138] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:45.343 [2024-07-22 18:03:49.485148] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:45.343 [2024-07-22 18:03:49.485157] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:45.343 [2024-07-22 18:03:49.485278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:45.343 [2024-07-22 18:03:49.485398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:45.343 [2024-07-22 18:03:49.485763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:45.343 [2024-07-22 18:03:49.485764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:46.283 18:03:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:46.283 18:03:50 -- common/autotest_common.sh@852 -- # return 0 00:26:46.283 18:03:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:46.283 18:03:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:46.283 18:03:50 -- common/autotest_common.sh@10 -- # set +x 00:26:46.283 18:03:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:46.283 18:03:50 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:46.283 18:03:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:46.283 18:03:50 -- common/autotest_common.sh@10 -- # set +x 00:26:46.283 [2024-07-22 18:03:50.234743] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:46.283 18:03:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:46.283 18:03:50 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:46.283 18:03:50 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:46.283 18:03:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:46.283 18:03:50 -- common/autotest_common.sh@10 -- # set +x 00:26:46.283 18:03:50 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:46.283 18:03:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:46.283 18:03:50 -- target/shutdown.sh@28 -- # cat 00:26:46.283 18:03:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:46.283 18:03:50 -- target/shutdown.sh@28 -- # cat 00:26:46.283 18:03:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:46.283 18:03:50 -- target/shutdown.sh@28 -- # cat 00:26:46.283 18:03:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:46.283 18:03:50 -- target/shutdown.sh@28 -- # cat 00:26:46.283 18:03:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:46.283 18:03:50 -- target/shutdown.sh@28 -- # cat 00:26:46.283 18:03:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:46.283 18:03:50 -- target/shutdown.sh@28 -- # cat 00:26:46.283 18:03:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:46.283 18:03:50 -- target/shutdown.sh@28 -- # cat 00:26:46.283 18:03:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:46.283 18:03:50 -- target/shutdown.sh@28 -- # cat 00:26:46.283 18:03:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:46.283 18:03:50 -- target/shutdown.sh@28 -- # cat 00:26:46.283 18:03:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:46.283 18:03:50 -- target/shutdown.sh@28 -- # cat 00:26:46.283 18:03:50 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:46.283 18:03:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:46.283 18:03:50 -- common/autotest_common.sh@10 -- # set +x 00:26:46.283 Malloc1 00:26:46.283 [2024-07-22 18:03:50.337795] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:46.283 Malloc2 
00:26:46.283 Malloc3 00:26:46.283 Malloc4 00:26:46.284 Malloc5 00:26:46.284 Malloc6 00:26:46.284 Malloc7 00:26:46.544 Malloc8 00:26:46.544 Malloc9 00:26:46.544 Malloc10 00:26:46.544 18:03:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:46.544 18:03:50 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:46.544 18:03:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:46.544 18:03:50 -- common/autotest_common.sh@10 -- # set +x 00:26:46.544 18:03:50 -- target/shutdown.sh@124 -- # perfpid=1792411 00:26:46.544 18:03:50 -- target/shutdown.sh@125 -- # waitforlisten 1792411 /var/tmp/bdevperf.sock 00:26:46.544 18:03:50 -- common/autotest_common.sh@819 -- # '[' -z 1792411 ']' 00:26:46.544 18:03:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:46.544 18:03:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:46.544 18:03:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:46.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:46.544 18:03:50 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:46.544 18:03:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:46.544 18:03:50 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:46.544 18:03:50 -- common/autotest_common.sh@10 -- # set +x 00:26:46.544 18:03:50 -- nvmf/common.sh@520 -- # config=() 00:26:46.544 18:03:50 -- nvmf/common.sh@520 -- # local subsystem config 00:26:46.544 18:03:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:46.544 18:03:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:46.544 { 00:26:46.544 "params": { 00:26:46.544 "name": "Nvme$subsystem", 00:26:46.544 "trtype": "$TEST_TRANSPORT", 00:26:46.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:46.544 "adrfam": "ipv4", 00:26:46.544 "trsvcid": "$NVMF_PORT", 00:26:46.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:46.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:46.544 "hdgst": ${hdgst:-false}, 00:26:46.544 "ddgst": ${ddgst:-false} 00:26:46.544 }, 00:26:46.544 "method": "bdev_nvme_attach_controller" 00:26:46.544 } 00:26:46.544 EOF 00:26:46.544 )") 00:26:46.544 18:03:50 -- nvmf/common.sh@542 -- # cat 00:26:46.544 18:03:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:46.544 18:03:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:46.544 { 00:26:46.544 "params": { 00:26:46.544 "name": "Nvme$subsystem", 00:26:46.544 "trtype": "$TEST_TRANSPORT", 00:26:46.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:46.544 "adrfam": "ipv4", 00:26:46.544 "trsvcid": "$NVMF_PORT", 00:26:46.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:46.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:46.544 "hdgst": ${hdgst:-false}, 00:26:46.544 "ddgst": ${ddgst:-false} 00:26:46.544 }, 00:26:46.544 "method": "bdev_nvme_attach_controller" 00:26:46.544 } 00:26:46.544 EOF 00:26:46.544 )") 00:26:46.544 18:03:50 -- nvmf/common.sh@542 -- # cat 00:26:46.545 18:03:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:46.545 18:03:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:46.545 { 00:26:46.545 "params": { 00:26:46.545 "name": "Nvme$subsystem", 00:26:46.545 "trtype": "$TEST_TRANSPORT", 00:26:46.545 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:26:46.545 "adrfam": "ipv4", 00:26:46.545 "trsvcid": "$NVMF_PORT", 00:26:46.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:46.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:46.545 "hdgst": ${hdgst:-false}, 00:26:46.545 "ddgst": ${ddgst:-false} 00:26:46.545 }, 00:26:46.545 "method": "bdev_nvme_attach_controller" 00:26:46.545 } 00:26:46.545 EOF 00:26:46.545 )") 00:26:46.545 18:03:50 -- nvmf/common.sh@542 -- # cat 00:26:46.545 18:03:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:46.545 18:03:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:46.545 { 00:26:46.545 "params": { 00:26:46.545 "name": "Nvme$subsystem", 00:26:46.545 "trtype": "$TEST_TRANSPORT", 00:26:46.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:46.545 "adrfam": "ipv4", 00:26:46.545 "trsvcid": "$NVMF_PORT", 00:26:46.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:46.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:46.545 "hdgst": ${hdgst:-false}, 00:26:46.545 "ddgst": ${ddgst:-false} 00:26:46.545 }, 00:26:46.545 "method": "bdev_nvme_attach_controller" 00:26:46.545 } 00:26:46.545 EOF 00:26:46.545 )") 00:26:46.545 18:03:50 -- nvmf/common.sh@542 -- # cat 00:26:46.545 18:03:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:46.545 18:03:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:46.545 { 00:26:46.545 "params": { 00:26:46.545 "name": "Nvme$subsystem", 00:26:46.545 "trtype": "$TEST_TRANSPORT", 00:26:46.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:46.545 "adrfam": "ipv4", 00:26:46.545 "trsvcid": "$NVMF_PORT", 00:26:46.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:46.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:46.545 "hdgst": ${hdgst:-false}, 00:26:46.545 "ddgst": ${ddgst:-false} 00:26:46.545 }, 00:26:46.545 "method": "bdev_nvme_attach_controller" 00:26:46.545 } 00:26:46.545 EOF 00:26:46.545 )") 00:26:46.545 18:03:50 -- nvmf/common.sh@542 -- # cat 00:26:46.545 18:03:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:46.545 18:03:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:46.545 { 00:26:46.545 "params": { 00:26:46.545 "name": "Nvme$subsystem", 00:26:46.545 "trtype": "$TEST_TRANSPORT", 00:26:46.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:46.545 "adrfam": "ipv4", 00:26:46.545 "trsvcid": "$NVMF_PORT", 00:26:46.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:46.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:46.545 "hdgst": ${hdgst:-false}, 00:26:46.545 "ddgst": ${ddgst:-false} 00:26:46.545 }, 00:26:46.545 "method": "bdev_nvme_attach_controller" 00:26:46.545 } 00:26:46.545 EOF 00:26:46.545 )") 00:26:46.545 [2024-07-22 18:03:50.777609] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:26:46.545 [2024-07-22 18:03:50.777662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1792411 ] 00:26:46.545 18:03:50 -- nvmf/common.sh@542 -- # cat 00:26:46.545 18:03:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:46.545 18:03:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:46.545 { 00:26:46.545 "params": { 00:26:46.545 "name": "Nvme$subsystem", 00:26:46.545 "trtype": "$TEST_TRANSPORT", 00:26:46.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:46.545 "adrfam": "ipv4", 00:26:46.545 "trsvcid": "$NVMF_PORT", 00:26:46.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:46.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:46.545 "hdgst": ${hdgst:-false}, 00:26:46.545 "ddgst": ${ddgst:-false} 00:26:46.545 }, 00:26:46.545 "method": "bdev_nvme_attach_controller" 00:26:46.545 } 00:26:46.545 EOF 00:26:46.545 )") 00:26:46.545 18:03:50 -- nvmf/common.sh@542 -- # cat 00:26:46.545 18:03:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:46.545 18:03:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:46.545 { 00:26:46.545 "params": { 00:26:46.545 "name": "Nvme$subsystem", 00:26:46.545 "trtype": "$TEST_TRANSPORT", 00:26:46.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:46.545 "adrfam": "ipv4", 00:26:46.545 "trsvcid": "$NVMF_PORT", 00:26:46.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:46.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:46.545 "hdgst": ${hdgst:-false}, 00:26:46.545 "ddgst": ${ddgst:-false} 00:26:46.545 }, 00:26:46.545 "method": "bdev_nvme_attach_controller" 00:26:46.545 } 00:26:46.545 EOF 00:26:46.545 )") 00:26:46.545 18:03:50 -- nvmf/common.sh@542 -- # cat 00:26:46.545 18:03:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:46.545 18:03:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:46.545 { 00:26:46.545 "params": { 00:26:46.545 "name": "Nvme$subsystem", 00:26:46.545 "trtype": "$TEST_TRANSPORT", 00:26:46.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:46.545 "adrfam": "ipv4", 00:26:46.545 "trsvcid": "$NVMF_PORT", 00:26:46.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:46.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:46.545 "hdgst": ${hdgst:-false}, 00:26:46.545 "ddgst": ${ddgst:-false} 00:26:46.545 }, 00:26:46.545 "method": "bdev_nvme_attach_controller" 00:26:46.545 } 00:26:46.545 EOF 00:26:46.545 )") 00:26:46.545 18:03:50 -- nvmf/common.sh@542 -- # cat 00:26:46.545 EAL: No free 2048 kB hugepages reported on node 1 00:26:46.545 18:03:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:46.545 18:03:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:46.545 { 00:26:46.545 "params": { 00:26:46.545 "name": "Nvme$subsystem", 00:26:46.545 "trtype": "$TEST_TRANSPORT", 00:26:46.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:46.545 "adrfam": "ipv4", 00:26:46.545 "trsvcid": "$NVMF_PORT", 00:26:46.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:46.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:46.545 "hdgst": ${hdgst:-false}, 00:26:46.545 "ddgst": ${ddgst:-false} 00:26:46.545 }, 00:26:46.545 "method": "bdev_nvme_attach_controller" 00:26:46.545 } 00:26:46.545 EOF 00:26:46.545 )") 00:26:46.545 18:03:50 -- nvmf/common.sh@542 -- # cat 00:26:46.545 18:03:50 -- nvmf/common.sh@544 -- # jq . 
00:26:46.545 18:03:50 -- nvmf/common.sh@545 -- # IFS=, 00:26:46.545 18:03:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:46.545 "params": { 00:26:46.545 "name": "Nvme1", 00:26:46.545 "trtype": "tcp", 00:26:46.545 "traddr": "10.0.0.2", 00:26:46.545 "adrfam": "ipv4", 00:26:46.545 "trsvcid": "4420", 00:26:46.545 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:46.545 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:46.545 "hdgst": false, 00:26:46.545 "ddgst": false 00:26:46.545 }, 00:26:46.545 "method": "bdev_nvme_attach_controller" 00:26:46.545 },{ 00:26:46.545 "params": { 00:26:46.545 "name": "Nvme2", 00:26:46.545 "trtype": "tcp", 00:26:46.545 "traddr": "10.0.0.2", 00:26:46.545 "adrfam": "ipv4", 00:26:46.545 "trsvcid": "4420", 00:26:46.545 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:46.545 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:46.545 "hdgst": false, 00:26:46.545 "ddgst": false 00:26:46.545 }, 00:26:46.545 "method": "bdev_nvme_attach_controller" 00:26:46.545 },{ 00:26:46.545 "params": { 00:26:46.545 "name": "Nvme3", 00:26:46.545 "trtype": "tcp", 00:26:46.545 "traddr": "10.0.0.2", 00:26:46.545 "adrfam": "ipv4", 00:26:46.545 "trsvcid": "4420", 00:26:46.545 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:46.545 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:46.545 "hdgst": false, 00:26:46.545 "ddgst": false 00:26:46.545 }, 00:26:46.545 "method": "bdev_nvme_attach_controller" 00:26:46.545 },{ 00:26:46.545 "params": { 00:26:46.545 "name": "Nvme4", 00:26:46.545 "trtype": "tcp", 00:26:46.545 "traddr": "10.0.0.2", 00:26:46.545 "adrfam": "ipv4", 00:26:46.545 "trsvcid": "4420", 00:26:46.545 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:46.545 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:46.545 "hdgst": false, 00:26:46.545 "ddgst": false 00:26:46.545 }, 00:26:46.545 "method": "bdev_nvme_attach_controller" 00:26:46.545 },{ 00:26:46.545 "params": { 00:26:46.545 "name": "Nvme5", 00:26:46.545 "trtype": "tcp", 00:26:46.545 "traddr": "10.0.0.2", 00:26:46.545 "adrfam": "ipv4", 00:26:46.545 "trsvcid": "4420", 00:26:46.545 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:46.545 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:46.545 "hdgst": false, 00:26:46.545 "ddgst": false 00:26:46.545 }, 00:26:46.545 "method": "bdev_nvme_attach_controller" 00:26:46.545 },{ 00:26:46.545 "params": { 00:26:46.545 "name": "Nvme6", 00:26:46.545 "trtype": "tcp", 00:26:46.545 "traddr": "10.0.0.2", 00:26:46.546 "adrfam": "ipv4", 00:26:46.546 "trsvcid": "4420", 00:26:46.546 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:46.546 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:46.546 "hdgst": false, 00:26:46.546 "ddgst": false 00:26:46.546 }, 00:26:46.546 "method": "bdev_nvme_attach_controller" 00:26:46.546 },{ 00:26:46.546 "params": { 00:26:46.546 "name": "Nvme7", 00:26:46.546 "trtype": "tcp", 00:26:46.546 "traddr": "10.0.0.2", 00:26:46.546 "adrfam": "ipv4", 00:26:46.546 "trsvcid": "4420", 00:26:46.546 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:46.546 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:46.546 "hdgst": false, 00:26:46.546 "ddgst": false 00:26:46.546 }, 00:26:46.546 "method": "bdev_nvme_attach_controller" 00:26:46.546 },{ 00:26:46.546 "params": { 00:26:46.546 "name": "Nvme8", 00:26:46.546 "trtype": "tcp", 00:26:46.546 "traddr": "10.0.0.2", 00:26:46.546 "adrfam": "ipv4", 00:26:46.546 "trsvcid": "4420", 00:26:46.546 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:46.546 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:46.546 "hdgst": false, 00:26:46.546 "ddgst": false 00:26:46.546 }, 00:26:46.546 "method": 
"bdev_nvme_attach_controller" 00:26:46.546 },{ 00:26:46.546 "params": { 00:26:46.546 "name": "Nvme9", 00:26:46.546 "trtype": "tcp", 00:26:46.546 "traddr": "10.0.0.2", 00:26:46.546 "adrfam": "ipv4", 00:26:46.546 "trsvcid": "4420", 00:26:46.546 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:46.546 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:46.546 "hdgst": false, 00:26:46.546 "ddgst": false 00:26:46.546 }, 00:26:46.546 "method": "bdev_nvme_attach_controller" 00:26:46.546 },{ 00:26:46.546 "params": { 00:26:46.546 "name": "Nvme10", 00:26:46.546 "trtype": "tcp", 00:26:46.546 "traddr": "10.0.0.2", 00:26:46.546 "adrfam": "ipv4", 00:26:46.546 "trsvcid": "4420", 00:26:46.546 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:46.546 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:46.546 "hdgst": false, 00:26:46.546 "ddgst": false 00:26:46.546 }, 00:26:46.546 "method": "bdev_nvme_attach_controller" 00:26:46.546 }' 00:26:46.806 [2024-07-22 18:03:50.857912] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.806 [2024-07-22 18:03:50.918462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.190 Running I/O for 10 seconds... 00:26:48.190 18:03:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:48.190 18:03:52 -- common/autotest_common.sh@852 -- # return 0 00:26:48.190 18:03:52 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:48.190 18:03:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:48.190 18:03:52 -- common/autotest_common.sh@10 -- # set +x 00:26:48.190 18:03:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:48.190 18:03:52 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:48.190 18:03:52 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:48.190 18:03:52 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:48.190 18:03:52 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:26:48.190 18:03:52 -- target/shutdown.sh@57 -- # local ret=1 00:26:48.190 18:03:52 -- target/shutdown.sh@58 -- # local i 00:26:48.190 18:03:52 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:26:48.190 18:03:52 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:48.190 18:03:52 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:48.190 18:03:52 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:48.190 18:03:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:48.190 18:03:52 -- common/autotest_common.sh@10 -- # set +x 00:26:48.190 18:03:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:48.190 18:03:52 -- target/shutdown.sh@60 -- # read_io_count=87 00:26:48.190 18:03:52 -- target/shutdown.sh@63 -- # '[' 87 -ge 100 ']' 00:26:48.190 18:03:52 -- target/shutdown.sh@67 -- # sleep 0.25 00:26:48.451 18:03:52 -- target/shutdown.sh@59 -- # (( i-- )) 00:26:48.451 18:03:52 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:26:48.451 18:03:52 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:48.451 18:03:52 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:26:48.451 18:03:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:48.451 18:03:52 -- common/autotest_common.sh@10 -- # set +x 00:26:48.729 18:03:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:48.729 18:03:52 -- target/shutdown.sh@60 -- # read_io_count=211 00:26:48.729 18:03:52 -- 
target/shutdown.sh@63 -- # '[' 211 -ge 100 ']' 00:26:48.729 18:03:52 -- target/shutdown.sh@64 -- # ret=0 00:26:48.729 18:03:52 -- target/shutdown.sh@65 -- # break 00:26:48.729 18:03:52 -- target/shutdown.sh@69 -- # return 0 00:26:48.729 18:03:52 -- target/shutdown.sh@134 -- # killprocess 1792148 00:26:48.729 18:03:52 -- common/autotest_common.sh@926 -- # '[' -z 1792148 ']' 00:26:48.729 18:03:52 -- common/autotest_common.sh@930 -- # kill -0 1792148 00:26:48.729 18:03:52 -- common/autotest_common.sh@931 -- # uname 00:26:48.729 18:03:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:48.729 18:03:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1792148 00:26:48.729 18:03:52 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:48.729 18:03:52 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:48.729 18:03:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1792148' 00:26:48.729 killing process with pid 1792148 00:26:48.729 18:03:52 -- common/autotest_common.sh@945 -- # kill 1792148 00:26:48.729 18:03:52 -- common/autotest_common.sh@950 -- # wait 1792148 00:26:48.729 [2024-07-22 18:03:52.816562] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21279e0 is same with the state(5) to be set 00:26:48.729 [2024-07-22 18:03:52.816626] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21279e0 is same with the state(5) to be set 00:26:48.729 [2024-07-22 18:03:52.816632] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21279e0 is same with the state(5) to be set 00:26:48.729 [2024-07-22 18:03:52.816638] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21279e0 is same with the state(5) to be set 00:26:48.729 [2024-07-22 18:03:52.816643] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21279e0 is same with the state(5) to be set 00:26:48.729 [2024-07-22 18:03:52.816647] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21279e0 is same with the state(5) to be set 00:26:48.729 [2024-07-22 18:03:52.816652] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21279e0 is same with the state(5) to be set 00:26:48.729 [2024-07-22 18:03:52.816657] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21279e0 is same with the state(5) to be set 00:26:48.729 [2024-07-22 18:03:52.816662] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21279e0 is same with the state(5) to be set 00:26:48.729 [2024-07-22 18:03:52.816666] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21279e0 is same with the state(5) to be set 00:26:48.729 [2024-07-22 18:03:52.816671] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21279e0 is same with the state(5) to be set 00:26:48.729 [2024-07-22 18:03:52.816676] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21279e0 is same with the state(5) to be set 00:26:48.729 [2024-07-22 18:03:52.816680] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21279e0 is same with the state(5) to be set 00:26:48.729 [2024-07-22 18:03:52.816685] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21279e0 is same with the state(5) to be set 00:26:48.729 [2024-07-22 18:03:52.816695] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
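state of tqpair=0x21279e0 is same with the state(5) to be set

Just above, shutdown.sh polls bdev_get_iostat on Nvme1n1 over the bdevperf RPC socket until at least 100 reads have completed (87 on the first poll, 211 after a 0.25 s sleep), then kills the target application (pid 1792148 here) with killprocess. A minimal sketch of that wait-and-kill pattern, reconstructed from the xtrace rather than quoted from the repository (rpc_cmd is the test environment's RPC wrapper seen in the trace):

# Sketch only: poll read I/O on a bdev via the given RPC socket, then stop
# the process once enough I/O has been observed.
waitforio() {
    local sock=$1 bdev=$2
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

killprocess() {
    local pid=$1
    kill -0 "$pid"                      # fail fast if the pid is already gone
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}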
[log trimmed: the identical "tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=... is same with the state(5) to be set" message repeats here for tqpair=0x21279e0 (18:03:52.816700-18:03:52.816961), tqpair=0x212a370 (18:03:52.818310-18:03:52.818608), tqpair=0x2127e90 (18:03:52.819664-18:03:52.820061) and tqpair=0x2128340 (18:03:52.821790-18:03:52.823005); only the timestamps differ]
00:26:48.732 [2024-07-22 18:03:52.823023] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128340 is same
with the state(5) to be set 00:26:48.732 [2024-07-22 18:03:52.823046] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128340 is same with the state(5) to be set 00:26:48.732 [2024-07-22 18:03:52.823065] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128340 is same with the state(5) to be set 00:26:48.732 [2024-07-22 18:03:52.823074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-07-22 18:03:52.823107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.732 [2024-07-22 18:03:52.823125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-07-22 18:03:52.823132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.732 [2024-07-22 18:03:52.823142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-07-22 18:03:52.823149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.732 [2024-07-22 18:03:52.823158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-07-22 18:03:52.823165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.732 [2024-07-22 18:03:52.823173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-07-22 18:03:52.823180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.732 [2024-07-22 18:03:52.823189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-07-22 18:03:52.823195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.732 [2024-07-22 18:03:52.823204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-07-22 18:03:52.823211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.732 [2024-07-22 18:03:52.823219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-07-22 18:03:52.823226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.732 [2024-07-22 18:03:52.823234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-07-22 18:03:52.823241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.732 
[2024-07-22 18:03:52.823249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-07-22 18:03:52.823256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.732 [2024-07-22 18:03:52.823264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-07-22 18:03:52.823271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.732 [2024-07-22 18:03:52.823279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-07-22 18:03:52.823290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.732 [2024-07-22 18:03:52.823298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-07-22 18:03:52.823305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.732 [2024-07-22 18:03:52.823313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-07-22 18:03:52.823320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.732 [2024-07-22 18:03:52.823328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-07-22 18:03:52.823335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.732 [2024-07-22 18:03:52.823343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-07-22 18:03:52.823356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.732 [2024-07-22 18:03:52.823365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-07-22 18:03:52.823372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.732 [2024-07-22 18:03:52.823382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-07-22 18:03:52.823388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.732 [2024-07-22 18:03:52.823397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-07-22 18:03:52.823403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.732 [2024-07-22 
18:03:52.823411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.732 [2024-07-22 18:03:52.823418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.732 [2024-07-22 18:03:52.823427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823566] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823718] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823869] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.733 [2024-07-22 18:03:52.823960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.733 [2024-07-22 18:03:52.823967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.734 [2024-07-22 18:03:52.823975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.734 [2024-07-22 18:03:52.823981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.734 [2024-07-22 18:03:52.823989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.734 [2024-07-22 18:03:52.823996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.734 [2024-07-22 18:03:52.824004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.734 [2024-07-22 18:03:52.824011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.734 [2024-07-22 18:03:52.824019] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.734 [2024-07-22 18:03:52.824025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.734 [2024-07-22 18:03:52.824034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.734 [2024-07-22 18:03:52.824040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.734 [2024-07-22 18:03:52.824049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.734 [2024-07-22 18:03:52.824057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.734 [2024-07-22 18:03:52.824065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.734 [2024-07-22 18:03:52.824071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.734 [2024-07-22 18:03:52.824080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.734 [2024-07-22 18:03:52.824086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.734 [2024-07-22 18:03:52.824394] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2fe9e40 was disconnected and freed. reset controller. 
00:26:48.734 [2024-07-22 18:03:52.824468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.734 [2024-07-22 18:03:52.824479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.734 [2024-07-22 18:03:52.824487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.734 [2024-07-22 18:03:52.824494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.734 [2024-07-22 18:03:52.824501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.734 [2024-07-22 18:03:52.824507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.734 [2024-07-22 18:03:52.824514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.734 [2024-07-22 18:03:52.824521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.734 [2024-07-22 18:03:52.824528] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1b240 is same with the state(5) to be set 00:26:48.734 [2024-07-22 18:03:52.824547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.734 [2024-07-22 18:03:52.824554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.734 [2024-07-22 18:03:52.824561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.734 [2024-07-22 18:03:52.824568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.734 [2024-07-22 18:03:52.824575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.734 [2024-07-22 18:03:52.824581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.734 [2024-07-22 18:03:52.824589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.734 [2024-07-22 18:03:52.824595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.734 [2024-07-22 18:03:52.824602] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9f20 is same with the state(5) to be set 00:26:48.734 [2024-07-22 18:03:52.824655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.734 [2024-07-22 18:03:52.824663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.734 [2024-07-22 18:03:52.824673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.734 [2024-07-22 18:03:52.824680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.734 [2024-07-22 18:03:52.824687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.734 [2024-07-22 18:03:52.824694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.734 [2024-07-22 18:03:52.824701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.734 [2024-07-22 18:03:52.824707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.734 [2024-07-22 18:03:52.824713] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5c660 is same with the state(5) to be set 00:26:48.734 [2024-07-22 18:03:52.824751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.734 [2024-07-22 18:03:52.824760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.734 [2024-07-22 18:03:52.824767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.734 [2024-07-22 18:03:52.824773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.734 [2024-07-22 18:03:52.824780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.734 [2024-07-22 18:03:52.824787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.734 [2024-07-22 18:03:52.824794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.734 [2024-07-22 18:03:52.824800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.734 [2024-07-22 18:03:52.824807] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df71c0 is same with the state(5) to be set 00:26:48.734 [2024-07-22 18:03:52.826573] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.734 [2024-07-22 18:03:52.826590] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.734 [2024-07-22 18:03:52.826595] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826600] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826604] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826609] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826614] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826618] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826623] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826627] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826635] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826639] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826644] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826648] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826653] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826657] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826662] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826666] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826671] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826676] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826680] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826685] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826689] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826694] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826698] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826703] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826707] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the 
state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826712] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826716] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826720] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826725] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826730] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826734] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826739] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826743] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826748] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826752] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826758] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826762] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826767] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826772] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826776] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826781] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826785] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826790] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826795] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826799] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826804] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826808] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826813] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826817] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826822] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826826] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826831] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826835] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826840] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826844] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826849] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826853] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826858] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826862] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826867] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.826871] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2128c60 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.827472] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.827491] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.827498] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.827503] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.827509] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.827515] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.827521] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.735 [2024-07-22 
18:03:52.827527] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.827533] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.827539] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.827545] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.827551] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.827557] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.827562] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.827568] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.735 [2024-07-22 18:03:52.827574] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827580] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827586] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827592] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827598] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827604] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827610] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827616] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827622] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827628] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827634] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827640] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827646] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827653] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same 
with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827659] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827665] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827670] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827677] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827683] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827688] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827694] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827700] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827706] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827711] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827717] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827723] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827729] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827735] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827741] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827746] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827752] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827758] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827763] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827769] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827775] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827781] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827787] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827793] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827798] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827804] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827812] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827818] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827824] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827829] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827835] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827841] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827847] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.827852] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129110 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.828802] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.828846] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.828866] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.828885] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.828904] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.828922] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.828942] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.828961] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.828980] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the 
state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.828999] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.829018] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.829036] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.829055] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.829073] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.829092] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.829111] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.829129] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.829148] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.829166] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.829194] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.829212] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.829230] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.829249] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.829267] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.829286] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.829304] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.829322] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.829341] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.736 [2024-07-22 18:03:52.829369] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829388] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829406] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829426] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829445] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829463] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829482] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829500] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829519] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829537] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829556] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829575] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829593] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829612] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829630] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829648] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829666] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829685] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829708] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829726] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829744] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829763] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829783] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829802] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 
18:03:52.829820] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829838] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829856] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829875] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829894] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829912] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829931] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829949] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829967] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.829987] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.830005] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295a0 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.831862] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.831890] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.831897] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.831904] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.831910] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.831916] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.831922] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.831928] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.831934] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.831940] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.831955] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same 
with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.831962] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.831968] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.831974] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.831980] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.831986] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.831992] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.831998] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.832004] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.832010] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.832016] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.832022] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.832027] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.832033] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.832039] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.832045] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.832051] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.832057] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.832063] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.832069] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.832074] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.832080] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.832086] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.832092] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.832098] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.832104] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.832110] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.832116] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.832124] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.737 [2024-07-22 18:03:52.832130] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832136] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832142] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832148] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832154] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832160] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832166] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832172] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832177] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832183] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832189] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832195] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832201] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832206] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832212] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the 
state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832218] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832224] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832230] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832236] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832242] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832247] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832254] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832260] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832266] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129a30 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832820] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832835] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832844] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832848] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832853] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832857] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832862] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832867] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832871] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832876] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832881] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832885] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832891] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832899] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832906] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832910] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832915] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832920] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832925] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832930] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832934] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832939] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832943] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832947] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832952] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832956] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.832964] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.738 [2024-07-22 18:03:52.833950] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:26:48.738 [2024-07-22 18:03:52.833983] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e1b240 (9): Bad file descriptor 00:26:48.738 [2024-07-22 18:03:52.834132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.738 [2024-07-22 18:03:52.834146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.738 [2024-07-22 18:03:52.834160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.738 [2024-07-22 18:03:52.834168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.738 [2024-07-22 18:03:52.834178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:48.738 [2024-07-22 18:03:52.834186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.738 [2024-07-22 18:03:52.834197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.738 [2024-07-22 18:03:52.834205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.738 [2024-07-22 18:03:52.834215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.738 [2024-07-22 18:03:52.834223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.738 [2024-07-22 18:03:52.834233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.738 [2024-07-22 18:03:52.834244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.738 [2024-07-22 18:03:52.834259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.738 [2024-07-22 18:03:52.834271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.738 [2024-07-22 18:03:52.834283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.738 [2024-07-22 18:03:52.834290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.738 [2024-07-22 18:03:52.834300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.738 [2024-07-22 18:03:52.834311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.738 [2024-07-22 18:03:52.834325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.739 [2024-07-22 18:03:52.834332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.739 [2024-07-22 18:03:52.834341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.739 [2024-07-22 18:03:52.834347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.739 [2024-07-22 18:03:52.834361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.739 [2024-07-22 18:03:52.834372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.739 [2024-07-22 18:03:52.834383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.739 
[2024-07-22 18:03:52.834393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.739 [2024-07-22 18:03:52.834402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.739 [2024-07-22 18:03:52.834409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.739 [2024-07-22 18:03:52.834417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.739 [2024-07-22 18:03:52.834423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.739 [2024-07-22 18:03:52.834432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.739 [2024-07-22 18:03:52.834439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.739 [2024-07-22 18:03:52.834447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.739 [2024-07-22 18:03:52.834454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.739 [2024-07-22 18:03:52.834462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.739 [2024-07-22 18:03:52.834469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.739 [2024-07-22 18:03:52.834477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.739 [2024-07-22 18:03:52.834484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.739 [2024-07-22 18:03:52.834492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.739 [2024-07-22 18:03:52.834499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.739 [2024-07-22 18:03:52.834508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.739 [2024-07-22 18:03:52.834514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.739 [2024-07-22 18:03:52.834523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.739 [2024-07-22 18:03:52.834529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.739 [2024-07-22 18:03:52.834538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.739 [2024-07-22 
18:03:52.834545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.739 [2024-07-22 18:03:52.834553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.739 [2024-07-22 18:03:52.834560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.739 [2024-07-22 18:03:52.834568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.739 [2024-07-22 18:03:52.834577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.739 [2024-07-22 18:03:52.834587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.739 [2024-07-22 18:03:52.834594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.739 [2024-07-22 18:03:52.834602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.739 [2024-07-22 18:03:52.834609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.739 [2024-07-22 18:03:52.834618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.739 [2024-07-22 18:03:52.834624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.739 [2024-07-22 18:03:52.834632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.739 [2024-07-22 18:03:52.834639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.739 [2024-07-22 18:03:52.834647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.739 [2024-07-22 18:03:52.834654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.739 [2024-07-22 18:03:52.834662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.739 [2024-07-22 18:03:52.834669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.739 [2024-07-22 18:03:52.834677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.739 [2024-07-22 18:03:52.834684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.739 [2024-07-22 18:03:52.834693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.739 [2024-07-22 18:03:52.834699] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.739 [2024-07-22 18:03:52.834707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.739 [2024-07-22 18:03:52.834714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.739 [2024-07-22 18:03:52.834722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.739 [2024-07-22 18:03:52.834729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.739 [2024-07-22 18:03:52.834737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.739 [2024-07-22 18:03:52.834744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.739 [2024-07-22 18:03:52.834752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.739 [2024-07-22 18:03:52.834759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.739 [2024-07-22 18:03:52.834767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.739 [2024-07-22 18:03:52.834775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.739 [2024-07-22 18:03:52.834783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.740 [2024-07-22 18:03:52.834790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.834798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.740 [2024-07-22 18:03:52.834805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.834813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.740 [2024-07-22 18:03:52.834820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.834829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.740 [2024-07-22 18:03:52.834835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.834843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.740 [2024-07-22 18:03:52.834850] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.834858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.740 [2024-07-22 18:03:52.834865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.834873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.740 [2024-07-22 18:03:52.834879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.834888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.740 [2024-07-22 18:03:52.834894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.834903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.740 [2024-07-22 18:03:52.834909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.834918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.740 [2024-07-22 18:03:52.834924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.834933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.740 [2024-07-22 18:03:52.834939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.834947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.740 [2024-07-22 18:03:52.834953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.834964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.740 [2024-07-22 18:03:52.834971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.834979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.740 [2024-07-22 18:03:52.834986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.834995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.740 [2024-07-22 18:03:52.835001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.835010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.740 [2024-07-22 18:03:52.835017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.835025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.740 [2024-07-22 18:03:52.835032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.835040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.740 [2024-07-22 18:03:52.835047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.835055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.740 [2024-07-22 18:03:52.835062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.835071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.740 [2024-07-22 18:03:52.835077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.835086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.740 [2024-07-22 18:03:52.835092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.835101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.740 [2024-07-22 18:03:52.835107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.835115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.740 [2024-07-22 18:03:52.835122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.835131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.740 [2024-07-22 18:03:52.835137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.835146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.740 [2024-07-22 18:03:52.835153] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.835162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.740 [2024-07-22 18:03:52.835168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.835176] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e8a930 is same with the state(5) to be set 00:26:48.740 [2024-07-22 18:03:52.835215] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e8a930 was disconnected and freed. reset controller. 00:26:48.740 [2024-07-22 18:03:52.835513] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df9f20 (9): Bad file descriptor 00:26:48.740 [2024-07-22 18:03:52.835550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.740 [2024-07-22 18:03:52.835559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.835566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.740 [2024-07-22 18:03:52.835573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.835580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.740 [2024-07-22 18:03:52.835587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.835594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.740 [2024-07-22 18:03:52.835600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.835607] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb9d80 is same with the state(5) to be set 00:26:48.740 [2024-07-22 18:03:52.835634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.740 [2024-07-22 18:03:52.835642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.835650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.740 [2024-07-22 18:03:52.835656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.835664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.740 [2024-07-22 18:03:52.835670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:48.740 [2024-07-22 18:03:52.835677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.740 [2024-07-22 18:03:52.835684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.740 [2024-07-22 18:03:52.835690] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbd250 is same with the state(5) to be set 00:26:48.741 [2024-07-22 18:03:52.835711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.741 [2024-07-22 18:03:52.835719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.835730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.741 [2024-07-22 18:03:52.835736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.835743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.741 [2024-07-22 18:03:52.835749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.835757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.741 [2024-07-22 18:03:52.835763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.835769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fadce0 is same with the state(5) to be set 00:26:48.741 [2024-07-22 18:03:52.835791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.741 [2024-07-22 18:03:52.835798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.835805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.741 [2024-07-22 18:03:52.835812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.835819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.741 [2024-07-22 18:03:52.835825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.835833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.741 [2024-07-22 18:03:52.835839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.835845] nvme_tcp.c: 
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fad8e0 is same with the state(5) to be set 00:26:48.741 [2024-07-22 18:03:52.835864] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d5c660 (9): Bad file descriptor 00:26:48.741 [2024-07-22 18:03:52.835885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.741 [2024-07-22 18:03:52.835893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.835900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.741 [2024-07-22 18:03:52.835906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.835914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.741 [2024-07-22 18:03:52.835920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.835927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.741 [2024-07-22 18:03:52.835933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.835941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21b60 is same with the state(5) to be set 00:26:48.741 [2024-07-22 18:03:52.835956] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df71c0 (9): Bad file descriptor 00:26:48.741 [2024-07-22 18:03:52.837212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.741 [2024-07-22 18:03:52.837228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.837240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.741 [2024-07-22 18:03:52.837249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.837259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.741 [2024-07-22 18:03:52.837267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.837276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.741 [2024-07-22 18:03:52.837284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.837294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34944 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.741 [2024-07-22 18:03:52.837302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.837312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.741 [2024-07-22 18:03:52.837320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.837330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.741 [2024-07-22 18:03:52.837337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.837347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.741 [2024-07-22 18:03:52.837359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.837370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.741 [2024-07-22 18:03:52.837377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.837387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.741 [2024-07-22 18:03:52.837395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.837405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.741 [2024-07-22 18:03:52.837412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.837422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.741 [2024-07-22 18:03:52.837430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.837443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.741 [2024-07-22 18:03:52.837449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.837458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.741 [2024-07-22 18:03:52.837464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.837473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:48.741 [2024-07-22 18:03:52.837479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.837488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.741 [2024-07-22 18:03:52.837494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.837502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.741 [2024-07-22 18:03:52.837509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.837519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.741 [2024-07-22 18:03:52.837526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.837535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.741 [2024-07-22 18:03:52.837542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.837550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.741 [2024-07-22 18:03:52.837556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.837565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.741 [2024-07-22 18:03:52.837571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.741 [2024-07-22 18:03:52.837580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.741 [2024-07-22 18:03:52.837586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.742 [2024-07-22 18:03:52.837595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.742 [2024-07-22 18:03:52.837602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.742 [2024-07-22 18:03:52.837610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.742 [2024-07-22 18:03:52.837616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.742 [2024-07-22 18:03:52.837625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:48.742 [2024-07-22 18:03:52.837632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.742 [2024-07-22 18:03:52.837641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.742 [2024-07-22 18:03:52.837647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.742 [2024-07-22 18:03:52.837656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.742 [2024-07-22 18:03:52.837662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.742 [2024-07-22 18:03:52.837671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.742 [2024-07-22 18:03:52.837677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.742 [2024-07-22 18:03:52.837686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.742 [2024-07-22 18:03:52.837692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.742 [2024-07-22 18:03:52.837700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.742 [2024-07-22 18:03:52.837707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.742 [2024-07-22 18:03:52.837715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.742 [2024-07-22 18:03:52.837721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.742 [2024-07-22 18:03:52.837730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.742 [2024-07-22 18:03:52.837737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.742 [2024-07-22 18:03:52.837746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.742 [2024-07-22 18:03:52.837752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.742 [2024-07-22 18:03:52.837761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.742 [2024-07-22 18:03:52.837767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.742 [2024-07-22 18:03:52.837776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.742 
[2024-07-22 18:03:52.837782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.742 [2024-07-22 18:03:52.837790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.742 [2024-07-22 18:03:52.837797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.742 [2024-07-22 18:03:52.837805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.742 [2024-07-22 18:03:52.837811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.742 [2024-07-22 18:03:52.837821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.742 [2024-07-22 18:03:52.837827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.742 [2024-07-22 18:03:52.837835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.742 [2024-07-22 18:03:52.837842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.742 [2024-07-22 18:03:52.837850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.742 [2024-07-22 18:03:52.837856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.742 [2024-07-22 18:03:52.837865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.742 [2024-07-22 18:03:52.837871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.742 [2024-07-22 18:03:52.837879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.742 [2024-07-22 18:03:52.837886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.742 [2024-07-22 18:03:52.837894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.742 [2024-07-22 18:03:52.837900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.742 [2024-07-22 18:03:52.837908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.742 [2024-07-22 18:03:52.837915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.742 [2024-07-22 18:03:52.837923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.742 [2024-07-22 
18:03:52.837929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.742 [2024-07-22 18:03:52.837938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.742 [2024-07-22 18:03:52.837944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.742 [2024-07-22 18:03:52.837952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.742 [2024-07-22 18:03:52.837958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.742 [2024-07-22 18:03:52.837967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.742 [2024-07-22 18:03:52.837973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.742 [2024-07-22 18:03:52.837981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.742 [2024-07-22 18:03:52.837988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.742 [2024-07-22 18:03:52.837999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.742 [2024-07-22 18:03:52.841962] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.742 [2024-07-22 18:03:52.841981] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.742 [2024-07-22 18:03:52.841988] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.742 [2024-07-22 18:03:52.841994] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.742 [2024-07-22 18:03:52.841999] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.742 [2024-07-22 18:03:52.842004] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.742 [2024-07-22 18:03:52.842009] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.742 [2024-07-22 18:03:52.842013] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.742 [2024-07-22 18:03:52.842018] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.742 [2024-07-22 18:03:52.842022] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.742 [2024-07-22 18:03:52.842027] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 
00:26:48.742 [2024-07-22 18:03:52.842032] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.742 [2024-07-22 18:03:52.842036] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.742 [2024-07-22 18:03:52.842041] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.742 [2024-07-22 18:03:52.842045] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.742 [2024-07-22 18:03:52.842050] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.742 [2024-07-22 18:03:52.842054] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.743 [2024-07-22 18:03:52.842059] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.743 [2024-07-22 18:03:52.842063] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.743 [2024-07-22 18:03:52.842068] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.743 [2024-07-22 18:03:52.842072] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.743 [2024-07-22 18:03:52.842077] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.743 [2024-07-22 18:03:52.842081] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.743 [2024-07-22 18:03:52.842086] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.743 [2024-07-22 18:03:52.842090] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.743 [2024-07-22 18:03:52.842095] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.743 [2024-07-22 18:03:52.842099] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.743 [2024-07-22 18:03:52.842108] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.743 [2024-07-22 18:03:52.842112] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.743 [2024-07-22 18:03:52.842117] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.743 [2024-07-22 18:03:52.842121] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.743 [2024-07-22 18:03:52.842126] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.743 [2024-07-22 18:03:52.842130] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.743 [2024-07-22 18:03:52.842135] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.743 [2024-07-22 18:03:52.842139] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.743 [2024-07-22 18:03:52.842143] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2129ee0 is same with the state(5) to be set 00:26:48.743 [2024-07-22 18:03:52.848736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.743 [2024-07-22 18:03:52.848779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.743 [2024-07-22 18:03:52.848788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.743 [2024-07-22 18:03:52.848797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.743 [2024-07-22 18:03:52.848804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.743 [2024-07-22 18:03:52.848813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.743 [2024-07-22 18:03:52.848819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.743 [2024-07-22 18:03:52.848828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.743 [2024-07-22 18:03:52.848835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.743 [2024-07-22 18:03:52.848843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.743 [2024-07-22 18:03:52.848850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.743 [2024-07-22 18:03:52.848859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.743 [2024-07-22 18:03:52.848866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.743 [2024-07-22 18:03:52.848874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.743 [2024-07-22 18:03:52.848881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.743 [2024-07-22 18:03:52.848890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.743 [2024-07-22 18:03:52.848896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:48.743 [2024-07-22 18:03:52.848911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.743 [2024-07-22 18:03:52.848917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.743 [2024-07-22 18:03:52.848926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.743 [2024-07-22 18:03:52.848933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.743 [2024-07-22 18:03:52.848941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.743 [2024-07-22 18:03:52.848948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.743 [2024-07-22 18:03:52.848957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.743 [2024-07-22 18:03:52.848963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.743 [2024-07-22 18:03:52.848972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.743 [2024-07-22 18:03:52.848979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.743 [2024-07-22 18:03:52.848988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.743 [2024-07-22 18:03:52.848995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.743 [2024-07-22 18:03:52.849063] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x279b9c0 was disconnected and freed. reset controller. 
00:26:48.743 [2024-07-22 18:03:52.849527] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:26:48.743 [2024-07-22 18:03:52.849561] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb9d80 (9): Bad file descriptor 00:26:48.743 [2024-07-22 18:03:52.849897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.743 [2024-07-22 18:03:52.850194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.743 [2024-07-22 18:03:52.850204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e1b240 with addr=10.0.0.2, port=4420 00:26:48.743 [2024-07-22 18:03:52.850212] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1b240 is same with the state(5) to be set 00:26:48.743 [2024-07-22 18:03:52.850255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.743 [2024-07-22 18:03:52.850265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.743 [2024-07-22 18:03:52.850272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.743 [2024-07-22 18:03:52.850280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.743 [2024-07-22 18:03:52.850288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.743 [2024-07-22 18:03:52.850294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.743 [2024-07-22 18:03:52.850301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.743 [2024-07-22 18:03:52.850312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.743 [2024-07-22 18:03:52.850318] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2ed0 is same with the state(5) to be set 00:26:48.743 [2024-07-22 18:03:52.850334] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbd250 (9): Bad file descriptor 00:26:48.743 [2024-07-22 18:03:52.850354] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fadce0 (9): Bad file descriptor 00:26:48.743 [2024-07-22 18:03:52.850367] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fad8e0 (9): Bad file descriptor 00:26:48.743 [2024-07-22 18:03:52.850383] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f21b60 (9): Bad file descriptor 00:26:48.743 [2024-07-22 18:03:52.850399] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e1b240 (9): Bad file descriptor 00:26:48.743 [2024-07-22 18:03:52.851821] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:26:48.743 [2024-07-22 18:03:52.851887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:48.743 [2024-07-22 18:03:52.851899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.743 [2024-07-22 18:03:52.851911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.743 [2024-07-22 18:03:52.851920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.744 [2024-07-22 18:03:52.851930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.744 [2024-07-22 18:03:52.851938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.744 [2024-07-22 18:03:52.851948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.744 [2024-07-22 18:03:52.851956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.744 [2024-07-22 18:03:52.851966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.744 [2024-07-22 18:03:52.851973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.744 [2024-07-22 18:03:52.851982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.744 [2024-07-22 18:03:52.851988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.744 [2024-07-22 18:03:52.851997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.744 [2024-07-22 18:03:52.852003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.744 [2024-07-22 18:03:52.852012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.744 [2024-07-22 18:03:52.852018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.744 [2024-07-22 18:03:52.852027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.744 [2024-07-22 18:03:52.852033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.744 [2024-07-22 18:03:52.852045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.744 [2024-07-22 18:03:52.852052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.744 [2024-07-22 18:03:52.852060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.744 [2024-07-22 
18:03:52.852066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.744 [2024-07-22 18:03:52.852075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.744 [2024-07-22 18:03:52.852081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.744 [2024-07-22 18:03:52.852090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.744 [2024-07-22 18:03:52.852097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.744 [2024-07-22 18:03:52.852105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.744 [2024-07-22 18:03:52.852111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.744 [2024-07-22 18:03:52.852119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.744 [2024-07-22 18:03:52.852126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.744 [2024-07-22 18:03:52.852135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.744 [2024-07-22 18:03:52.852141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.744 [2024-07-22 18:03:52.852150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.744 [2024-07-22 18:03:52.852156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.744 [2024-07-22 18:03:52.852165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.744 [2024-07-22 18:03:52.852172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.744 [2024-07-22 18:03:52.852180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.744 [2024-07-22 18:03:52.852186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.744 [2024-07-22 18:03:52.852195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.744 [2024-07-22 18:03:52.852201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.744 [2024-07-22 18:03:52.852210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.744 [2024-07-22 18:03:52.852216] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.744 [2024-07-22 18:03:52.852225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.744 [2024-07-22 18:03:52.852233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.744 [2024-07-22 18:03:52.852241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.744 [2024-07-22 18:03:52.852248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.744 [2024-07-22 18:03:52.852256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.744 [2024-07-22 18:03:52.852263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.744 [2024-07-22 18:03:52.852272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.744 [2024-07-22 18:03:52.852278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.744 [2024-07-22 18:03:52.852287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.744 [2024-07-22 18:03:52.852294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.744 [2024-07-22 18:03:52.852303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.744 [2024-07-22 18:03:52.852309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.744 [2024-07-22 18:03:52.852318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.744 [2024-07-22 18:03:52.852324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.744 [2024-07-22 18:03:52.852333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.744 [2024-07-22 18:03:52.852340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.744 [2024-07-22 18:03:52.852354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.744 [2024-07-22 18:03:52.852361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.744 [2024-07-22 18:03:52.852369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.744 [2024-07-22 18:03:52.852376] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.744 [2024-07-22 18:03:52.852384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.744 [2024-07-22 18:03:52.852391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852528] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852679] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.852867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.852874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.854071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.854083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.854095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.854103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.854114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.854122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.745 [2024-07-22 18:03:52.854133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.745 [2024-07-22 18:03:52.854142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854361] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854518] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854682] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.746 [2024-07-22 18:03:52.854722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.746 [2024-07-22 18:03:52.854729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.854738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.854744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.854755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.854762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.854770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.854777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.854786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.854793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.854802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.854810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.854821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.854828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.854836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.854843] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.854852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.854859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.854868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.854874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.854883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.854889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.854898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.854905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.854914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.854921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.854929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.854936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.854945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.854952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.854961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.854967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.854976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.854983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.854992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.854999] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.855007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.855015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.855024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.855031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.855040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.855046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.855055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.855062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.855070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.855078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.855086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.855093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.855102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.855109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.856295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.856307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.856319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.856328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.856339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.856353] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.856364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.856373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.856383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.856391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.856402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.856411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.856424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.856433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.856444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.856452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.856463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.856473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.856484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.856492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.856502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.856511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.747 [2024-07-22 18:03:52.856522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.747 [2024-07-22 18:03:52.856529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.856538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.856545] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.856554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.856561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.856570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.856577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.856585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.856594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.856603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.856610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.856619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.856626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.856636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.856645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.856653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.856660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.856669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.856676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.856684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.856691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.856700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.856708] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.856718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.856727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.856735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.856743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.856751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.856758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.856767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.856774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.856782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.856789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.856798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.856805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.856813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.856820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.856829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.856836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.856848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.856856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.856864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.856871] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.856880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.856887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.856896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.856903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.856911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.856918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.856927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.856935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.856945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.856952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.856961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.856968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.856976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.856983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.856991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.856998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.857007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.857014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.857022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.857029] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.857038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.857046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.857055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.857061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.857070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.857076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.857085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.857092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.748 [2024-07-22 18:03:52.857101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.748 [2024-07-22 18:03:52.857108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.749 [2024-07-22 18:03:52.857117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.749 [2024-07-22 18:03:52.857124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.749 [2024-07-22 18:03:52.857133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.749 [2024-07-22 18:03:52.857139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.749 [2024-07-22 18:03:52.857148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.749 [2024-07-22 18:03:52.857155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.749 [2024-07-22 18:03:52.857164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.749 [2024-07-22 18:03:52.857170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.749 [2024-07-22 18:03:52.857179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.749 [2024-07-22 18:03:52.857186] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.749 [2024-07-22 18:03:52.857195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.749 [2024-07-22 18:03:52.857201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.749 [2024-07-22 18:03:52.857210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.749 [2024-07-22 18:03:52.857216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.749 [2024-07-22 18:03:52.857225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.749 [2024-07-22 18:03:52.857232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.749 [2024-07-22 18:03:52.857242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.749 [2024-07-22 18:03:52.857249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.749 [2024-07-22 18:03:52.857258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.749 [2024-07-22 18:03:52.857265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.749 [2024-07-22 18:03:52.857274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.749 [2024-07-22 18:03:52.857280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.749 [2024-07-22 18:03:52.857289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.749 [2024-07-22 18:03:52.857296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.749 [2024-07-22 18:03:52.857304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.749 [2024-07-22 18:03:52.857311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.749 [2024-07-22 18:03:52.857320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.749 [2024-07-22 18:03:52.857326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.749 [2024-07-22 18:03:52.857335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.749 [2024-07-22 18:03:52.857342] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.749 [2024-07-22 18:03:52.857355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.749 [2024-07-22 18:03:52.857362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.749 [2024-07-22 18:03:52.858840] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:48.749 [2024-07-22 18:03:52.858880] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:48.749 [2024-07-22 18:03:52.858926] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:48.749 [2024-07-22 18:03:52.858987] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.749 [2024-07-22 18:03:52.858999] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:26:48.749 [2024-07-22 18:03:52.859008] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:26:48.749 [2024-07-22 18:03:52.859586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.749 [2024-07-22 18:03:52.859829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.749 [2024-07-22 18:03:52.859843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb9d80 with addr=10.0.0.2, port=4420 00:26:48.749 [2024-07-22 18:03:52.859852] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb9d80 is same with the state(5) to be set 00:26:48.749 [2024-07-22 18:03:52.860181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.749 [2024-07-22 18:03:52.860362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.749 [2024-07-22 18:03:52.860373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f21b60 with addr=10.0.0.2, port=4420 00:26:48.749 [2024-07-22 18:03:52.860385] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21b60 is same with the state(5) to be set 00:26:48.749 [2024-07-22 18:03:52.860393] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:26:48.749 [2024-07-22 18:03:52.860399] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:26:48.749 [2024-07-22 18:03:52.860407] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:26:48.749 [2024-07-22 18:03:52.860824] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.749 [2024-07-22 18:03:52.861024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.749 [2024-07-22 18:03:52.861183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.749 [2024-07-22 18:03:52.861193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df71c0 with addr=10.0.0.2, port=4420 00:26:48.749 [2024-07-22 18:03:52.861200] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df71c0 is same with the state(5) to be set 00:26:48.749 [2024-07-22 18:03:52.861468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.749 [2024-07-22 18:03:52.861716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.749 [2024-07-22 18:03:52.861725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d5c660 with addr=10.0.0.2, port=4420 00:26:48.749 [2024-07-22 18:03:52.861733] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5c660 is same with the state(5) to be set 00:26:48.749 [2024-07-22 18:03:52.861908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.749 [2024-07-22 18:03:52.861976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.749 [2024-07-22 18:03:52.861984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df9f20 with addr=10.0.0.2, port=4420 00:26:48.749 [2024-07-22 18:03:52.861991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9f20 is same with the state(5) to be set 00:26:48.749 [2024-07-22 18:03:52.862001] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb9d80 (9): Bad file descriptor 00:26:48.749 [2024-07-22 18:03:52.862012] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f21b60 (9): Bad file descriptor 00:26:48.749 [2024-07-22 18:03:52.862025] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec2ed0 (9): Bad file descriptor 00:26:48.750 [2024-07-22 18:03:52.862888] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:48.750 [2024-07-22 18:03:52.862930] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df71c0 (9): Bad file descriptor 00:26:48.750 [2024-07-22 18:03:52.862941] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d5c660 (9): Bad file descriptor 00:26:48.750 [2024-07-22 18:03:52.862950] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df9f20 (9): Bad file descriptor 00:26:48.750 [2024-07-22 18:03:52.862958] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:26:48.750 [2024-07-22 18:03:52.862964] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:26:48.750 [2024-07-22 18:03:52.862971] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:26:48.750 [2024-07-22 18:03:52.862983] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:26:48.750 [2024-07-22 18:03:52.862989] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:26:48.750 [2024-07-22 18:03:52.862995] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:26:48.750 [2024-07-22 18:03:52.863012] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:48.750 [2024-07-22 18:03:52.863022] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:48.750 [2024-07-22 18:03:52.863085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.750 [2024-07-22 18:03:52.863096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.750 [2024-07-22 18:03:52.863111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.750 [2024-07-22 18:03:52.863119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.750 [2024-07-22 18:03:52.863128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.750 [2024-07-22 18:03:52.863135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.750 [2024-07-22 18:03:52.863145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.750 [2024-07-22 18:03:52.863152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.750 [2024-07-22 18:03:52.863160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.750 [2024-07-22 18:03:52.863167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.750 [2024-07-22 18:03:52.863176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.750 [2024-07-22 18:03:52.863183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.750 [2024-07-22 18:03:52.863192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.750 [2024-07-22 18:03:52.863199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.750 [2024-07-22 18:03:52.863208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.750 [2024-07-22 18:03:52.863215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:48.750 [2024-07-22 18:03:52.863223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.750 [2024-07-22 18:03:52.863230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.750 [2024-07-22 18:03:52.863239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.750 [2024-07-22 18:03:52.863246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.750 [2024-07-22 18:03:52.863255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.750 [2024-07-22 18:03:52.863262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.750 [2024-07-22 18:03:52.863271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.750 [2024-07-22 18:03:52.863280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.750 [2024-07-22 18:03:52.863289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.750 [2024-07-22 18:03:52.863296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.750 [2024-07-22 18:03:52.863305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.750 [2024-07-22 18:03:52.863312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.750 [2024-07-22 18:03:52.863321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.750 [2024-07-22 18:03:52.863328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.750 [2024-07-22 18:03:52.863336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.750 [2024-07-22 18:03:52.863344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.750 [2024-07-22 18:03:52.863360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.750 [2024-07-22 18:03:52.863367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.750 [2024-07-22 18:03:52.863376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.750 [2024-07-22 18:03:52.863383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:48.750 [2024-07-22 18:03:52.863392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.750 [2024-07-22 18:03:52.863399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.750 [2024-07-22 18:03:52.863408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.750 [2024-07-22 18:03:52.863414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.750 [2024-07-22 18:03:52.863423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.750 [2024-07-22 18:03:52.863430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.750 [2024-07-22 18:03:52.863439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.750 [2024-07-22 18:03:52.863446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.750 [2024-07-22 18:03:52.863454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.750 [2024-07-22 18:03:52.863461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.750 [2024-07-22 18:03:52.863470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.750 [2024-07-22 18:03:52.863477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.750 [2024-07-22 18:03:52.863486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.750 [2024-07-22 18:03:52.863495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.750 [2024-07-22 18:03:52.863504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.750 [2024-07-22 18:03:52.863510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.750 [2024-07-22 18:03:52.863519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.750 [2024-07-22 18:03:52.863526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.750 [2024-07-22 18:03:52.863536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.750 [2024-07-22 18:03:52.863543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.750 [2024-07-22 
18:03:52.863552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.750 [2024-07-22 18:03:52.863559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.750 [2024-07-22 18:03:52.863568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.750 [2024-07-22 18:03:52.863575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.750 [2024-07-22 18:03:52.863584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.863591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.863599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.863606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.863615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.863622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.863631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.863638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.863647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.863654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.863663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.863670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.863679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.863686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.863697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.863704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.863712] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.863719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.863728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.863735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.863744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.863751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.863759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.863766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.863775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.863782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.863791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.863798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.863806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.863813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.863822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.863829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.863838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.863846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.863855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.863862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.863871] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.863878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.863887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.863896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.863905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.863912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.863920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.863927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.863936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.863947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.863965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.863972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.863981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.863988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.863997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.864004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.864017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.864024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.864033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.864040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.864049] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.864056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.864065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.864076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.864084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.864091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.864100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.864107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.864117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.864125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.864134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.864140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.864148] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x293e690 is same with the state(5) to be set 00:26:48.751 [2024-07-22 18:03:52.865300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.865312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.751 [2024-07-22 18:03:52.865323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.751 [2024-07-22 18:03:52.865332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865378] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865552] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865704] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865855] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.752 [2024-07-22 18:03:52.865901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.752 [2024-07-22 18:03:52.865910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.865917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.865925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.865932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.865941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.865948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.865956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.865964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.865972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.865979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.865987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.865994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.866002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.866009] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.866017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.866024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.866033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.866039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.866048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.866055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.866063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.866071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.866079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.866086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.866094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.866101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.866109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.866116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.866124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.866131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.866139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.866146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.866154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.866161] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.866169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.866176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.866184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.866191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.866199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.866205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.866214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.866220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.866229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.866235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.866243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.866250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.866260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.866267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.866275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.866282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.866291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.866297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.866306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.866313] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.866321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2ae1340 is same with the state(5) to be set 00:26:48.753 [2024-07-22 18:03:52.867460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.867473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.867485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.867493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.867503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.867511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.867521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.867529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.867539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.867547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.867557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.867565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.867574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.867583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.867593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.867601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.753 [2024-07-22 18:03:52.867615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.753 [2024-07-22 18:03:52.867622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.754 [2024-07-22 18:03:52.867630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.754 [2024-07-22 18:03:52.867637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.754 [2024-07-22 18:03:52.867646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.754 [2024-07-22 18:03:52.867652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.754 [2024-07-22 18:03:52.867660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.754 [2024-07-22 18:03:52.867667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.754 [2024-07-22 18:03:52.867676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.754 [2024-07-22 18:03:52.867682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.754 [2024-07-22 18:03:52.867691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.754 [2024-07-22 18:03:52.867698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.754 [2024-07-22 18:03:52.867706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.754 [2024-07-22 18:03:52.867713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.754 [2024-07-22 18:03:52.867722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.754 [2024-07-22 18:03:52.867729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.754 [2024-07-22 18:03:52.867737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.754 [2024-07-22 18:03:52.867744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.754 [2024-07-22 18:03:52.867752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.754 [2024-07-22 18:03:52.867759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.754 [2024-07-22 18:03:52.867768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.754 [2024-07-22 18:03:52.867774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.754 [2024-07-22 18:03:52.867783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 
nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.754 [2024-07-22 18:03:52.867791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.754 [2024-07-22 18:03:52.867799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.754 [2024-07-22 18:03:52.867807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.754 [2024-07-22 18:03:52.867816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.754 [2024-07-22 18:03:52.867823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.754 [2024-07-22 18:03:52.867831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.754 [2024-07-22 18:03:52.867838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.754 [2024-07-22 18:03:52.867847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.754 [2024-07-22 18:03:52.867854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.754 [2024-07-22 18:03:52.867862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.754 [2024-07-22 18:03:52.867869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.754 [2024-07-22 18:03:52.867877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.754 [2024-07-22 18:03:52.867884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.754 [2024-07-22 18:03:52.867893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.754 [2024-07-22 18:03:52.867899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.754 [2024-07-22 18:03:52.867907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.754 [2024-07-22 18:03:52.867914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.754 [2024-07-22 18:03:52.867922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.754 [2024-07-22 18:03:52.867929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.754 [2024-07-22 18:03:52.867937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:35840 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.754 [2024-07-22 18:03:52.867944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.754 [2024-07-22 18:03:52.867952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.754 [2024-07-22 18:03:52.867959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.754 [2024-07-22 18:03:52.867967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.754 [2024-07-22 18:03:52.867974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.754 [2024-07-22 18:03:52.867983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.754 [2024-07-22 18:03:52.867989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.754 [2024-07-22 18:03:52.867999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.754 [2024-07-22 18:03:52.868006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.754 [2024-07-22 18:03:52.868014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.754 [2024-07-22 18:03:52.868021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.754 [2024-07-22 18:03:52.868029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.754 [2024-07-22 18:03:52.868036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.754 [2024-07-22 18:03:52.868045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.754 [2024-07-22 18:03:52.868051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.754 [2024-07-22 18:03:52.868060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.754 [2024-07-22 18:03:52.868067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.754 [2024-07-22 18:03:52.868075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.755 [2024-07-22 18:03:52.868082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.755 [2024-07-22 18:03:52.868090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37120 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.755 [2024-07-22 18:03:52.868097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.755 [2024-07-22 18:03:52.868105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.755 [2024-07-22 18:03:52.868112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.755 [2024-07-22 18:03:52.868120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.755 [2024-07-22 18:03:52.868127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.755 [2024-07-22 18:03:52.868136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.755 [2024-07-22 18:03:52.868142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.755 [2024-07-22 18:03:52.868151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.755 [2024-07-22 18:03:52.868157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.755 [2024-07-22 18:03:52.868166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.755 [2024-07-22 18:03:52.868173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.755 [2024-07-22 18:03:52.868181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.755 [2024-07-22 18:03:52.868189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.755 [2024-07-22 18:03:52.868197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.755 [2024-07-22 18:03:52.868204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.755 [2024-07-22 18:03:52.868212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.755 [2024-07-22 18:03:52.868218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.755 [2024-07-22 18:03:52.868227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.755 [2024-07-22 18:03:52.868234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.755 [2024-07-22 18:03:52.868242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:48.755 [2024-07-22 18:03:52.868248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.755 [2024-07-22 18:03:52.868257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.755 [2024-07-22 18:03:52.868263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.755 [2024-07-22 18:03:52.868272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.755 [2024-07-22 18:03:52.868278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.755 [2024-07-22 18:03:52.868287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.755 [2024-07-22 18:03:52.868294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.755 [2024-07-22 18:03:52.868302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.755 [2024-07-22 18:03:52.868309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.755 [2024-07-22 18:03:52.868318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.755 [2024-07-22 18:03:52.868324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.755 [2024-07-22 18:03:52.868333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.755 [2024-07-22 18:03:52.868340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.755 [2024-07-22 18:03:52.868352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.755 [2024-07-22 18:03:52.868359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.755 [2024-07-22 18:03:52.868367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.755 [2024-07-22 18:03:52.868374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.755 [2024-07-22 18:03:52.868384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.755 [2024-07-22 18:03:52.868391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.755 [2024-07-22 18:03:52.868399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:48.755 [2024-07-22 18:03:52.868406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.755 [2024-07-22 18:03:52.868414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.755 [2024-07-22 18:03:52.868421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.755 [2024-07-22 18:03:52.868429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.755 [2024-07-22 18:03:52.868436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.755 [2024-07-22 18:03:52.868444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.755 [2024-07-22 18:03:52.868451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.755 [2024-07-22 18:03:52.868459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.755 [2024-07-22 18:03:52.868466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.755 [2024-07-22 18:03:52.868474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2c84010 is same with the state(5) to be set 00:26:48.755 [2024-07-22 18:03:52.869620] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:26:48.755 [2024-07-22 18:03:52.869636] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.755 [2024-07-22 18:03:52.869644] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.755 [2024-07-22 18:03:52.869652] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:26:48.755 [2024-07-22 18:03:52.869662] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:26:48.755 [2024-07-22 18:03:52.869691] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.755 [2024-07-22 18:03:52.869699] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.755 [2024-07-22 18:03:52.869707] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.755 [2024-07-22 18:03:52.869719] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:26:48.755 [2024-07-22 18:03:52.869726] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:26:48.755 [2024-07-22 18:03:52.869734] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:26:48.755 [2024-07-22 18:03:52.869744] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:26:48.755 [2024-07-22 18:03:52.869752] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:26:48.755 [2024-07-22 18:03:52.869759] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:26:48.755 [2024-07-22 18:03:52.869785] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:48.755 [2024-07-22 18:03:52.869800] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:48.755 [2024-07-22 18:03:52.869810] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:48.755 [2024-07-22 18:03:52.869873] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:26:48.755 [2024-07-22 18:03:52.869883] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.755 [2024-07-22 18:03:52.869894] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.755 [2024-07-22 18:03:52.869903] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.755 [2024-07-22 18:03:52.870107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.755 [2024-07-22 18:03:52.870400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.755 [2024-07-22 18:03:52.870410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e1b240 with addr=10.0.0.2, port=4420 00:26:48.756 [2024-07-22 18:03:52.870417] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1b240 is same with the state(5) to be set 00:26:48.756 [2024-07-22 18:03:52.870735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.756 [2024-07-22 18:03:52.870944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.756 [2024-07-22 18:03:52.870953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbd250 with addr=10.0.0.2, port=4420 00:26:48.756 [2024-07-22 18:03:52.870960] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbd250 is same with the state(5) to be set 00:26:48.756 [2024-07-22 18:03:52.871119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.756 [2024-07-22 18:03:52.871441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.756 [2024-07-22 18:03:52.871450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fad8e0 with addr=10.0.0.2, port=4420 00:26:48.756 [2024-07-22 18:03:52.871457] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fad8e0 is same with the state(5) to be set 00:26:48.756 [2024-07-22 18:03:52.872507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.756 [2024-07-22 18:03:52.872647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.756 [2024-07-22 18:03:52.872656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fadce0 with addr=10.0.0.2, port=4420 00:26:48.756 [2024-07-22 
18:03:52.872663] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fadce0 is same with the state(5) to be set 00:26:48.756 [2024-07-22 18:03:52.872672] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e1b240 (9): Bad file descriptor 00:26:48.756 [2024-07-22 18:03:52.872681] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbd250 (9): Bad file descriptor 00:26:48.756 [2024-07-22 18:03:52.872689] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fad8e0 (9): Bad file descriptor 00:26:48.756 [2024-07-22 18:03:52.872742] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fadce0 (9): Bad file descriptor 00:26:48.756 [2024-07-22 18:03:52.872751] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:26:48.756 [2024-07-22 18:03:52.872757] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:26:48.756 [2024-07-22 18:03:52.872764] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:26:48.756 [2024-07-22 18:03:52.872773] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:26:48.756 [2024-07-22 18:03:52.872779] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:26:48.756 [2024-07-22 18:03:52.872788] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:26:48.756 [2024-07-22 18:03:52.872798] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:26:48.756 [2024-07-22 18:03:52.872804] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:26:48.756 [2024-07-22 18:03:52.872810] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
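(Editor's note, not part of the CI output: the reconnect attempts above repeatedly hit "posix_sock_create: connect() failed, errno = 111" against 10.0.0.2:4420. A minimal sketch, assuming Linux errno numbering on the CI host, that decodes that value — it maps to ECONNREFUSED, consistent with the target's listener already being torn down while the bdev_nvme layer keeps retrying.)

```python
# Not part of the CI output: decode the repeated
# "connect() failed, errno = 111" entries from the log above.
# Assumes Linux errno numbering, as on the CI host.
import errno
import os

err = 111                                    # value printed by posix_sock_create
name = errno.errorcode.get(err, "UNKNOWN")   # -> 'ECONNREFUSED' on Linux
text = os.strerror(err)                      # -> 'Connection refused'
print(f"errno {err} = {name} ({text})")
```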
00:26:48.756 [2024-07-22 18:03:52.872853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.756 [2024-07-22 18:03:52.872862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.756 [2024-07-22 18:03:52.872873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.756 [2024-07-22 18:03:52.872880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.756 [2024-07-22 18:03:52.872889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.756 [2024-07-22 18:03:52.872895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.756 [2024-07-22 18:03:52.872904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.756 [2024-07-22 18:03:52.872910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.756 [2024-07-22 18:03:52.872920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.756 [2024-07-22 18:03:52.872926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.756 [2024-07-22 18:03:52.872935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.756 [2024-07-22 18:03:52.872941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.756 [2024-07-22 18:03:52.872950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.756 [2024-07-22 18:03:52.872957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.756 [2024-07-22 18:03:52.872965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.756 [2024-07-22 18:03:52.872972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.756 [2024-07-22 18:03:52.872980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.756 [2024-07-22 18:03:52.872987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.756 [2024-07-22 18:03:52.872996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.756 [2024-07-22 18:03:52.873003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.756 [2024-07-22 
18:03:52.873011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.756 [2024-07-22 18:03:52.873018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.756 [2024-07-22 18:03:52.873029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.756 [2024-07-22 18:03:52.873035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.756 [2024-07-22 18:03:52.873044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.756 [2024-07-22 18:03:52.873051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.756 [2024-07-22 18:03:52.873060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.756 [2024-07-22 18:03:52.873067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.756 [2024-07-22 18:03:52.873075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.756 [2024-07-22 18:03:52.873081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.756 [2024-07-22 18:03:52.873090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.756 [2024-07-22 18:03:52.873096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.756 [2024-07-22 18:03:52.873105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.756 [2024-07-22 18:03:52.873112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.756 [2024-07-22 18:03:52.873121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.756 [2024-07-22 18:03:52.873127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.756 [2024-07-22 18:03:52.873137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.756 [2024-07-22 18:03:52.873143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.756 [2024-07-22 18:03:52.873153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.756 [2024-07-22 18:03:52.873159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.756 [2024-07-22 18:03:52.873168] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.756 [2024-07-22 18:03:52.873174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.756 [2024-07-22 18:03:52.873183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.756 [2024-07-22 18:03:52.873189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.756 [2024-07-22 18:03:52.873198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.756 [2024-07-22 18:03:52.873205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.756 [2024-07-22 18:03:52.873214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.756 [2024-07-22 18:03:52.873221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.756 [2024-07-22 18:03:52.873230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873322] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873480] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873634] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.757 [2024-07-22 18:03:52.873778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.757 [2024-07-22 18:03:52.873786] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.758 [2024-07-22 18:03:52.873793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.758 [2024-07-22 18:03:52.873803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.758 [2024-07-22 18:03:52.873809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.758 [2024-07-22 18:03:52.873818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.758 [2024-07-22 18:03:52.873824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.758 [2024-07-22 18:03:52.873833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.758 [2024-07-22 18:03:52.873839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.758 [2024-07-22 18:03:52.873847] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2e47130 is same with the state(5) to be set 00:26:48.758 [2024-07-22 18:03:52.875565] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.758 [2024-07-22 18:03:52.875584] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.758 [2024-07-22 18:03:52.875590] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
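The flood of completions above is expected at this stage of nvmf_shutdown_tc3: ABORTED - SQ DELETION (00/08) is the generic NVMe status returned when a submission queue is deleted while commands are still outstanding, so every queued verify READ/WRITE on qid:1 completes with that status as the target is shut down, and the controller resets that follow fail because the target side is going away. To triage a log like this without reading every entry, a pair of ad-hoc shell one-liners can tally the aborted opcodes and confirm the status is uniform; the log file name build.log is assumed here, and these helpers are not part of the test suite:

    # Tally aborted I/O commands by opcode (READ vs WRITE); -o handles many entries per line.
    grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' build.log | awk '{print $3}' | sort | uniq -c
    # Count how many completions carry the SQ-deletion abort status.
    grep -o 'ABORTED - SQ DELETION (00/08)' build.log | wc -l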
00:26:48.758 task offset: 35072 on job bdev=Nvme10n1 fails 00:26:48.758 00:26:48.758 Latency(us) 00:26:48.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:48.758 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:48.758 Job: Nvme1n1 ended in about 0.68 seconds with error 00:26:48.758 Verification LBA range: start 0x0 length 0x400 00:26:48.758 Nvme1n1 : 0.68 367.55 22.97 93.72 0.00 137739.87 51622.20 150833.62 00:26:48.758 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:48.758 Job: Nvme2n1 ended in about 0.69 seconds with error 00:26:48.758 Verification LBA range: start 0x0 length 0x400 00:26:48.758 Nvme2n1 : 0.69 367.81 22.99 93.41 0.00 136265.42 15426.17 126635.72 00:26:48.758 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:48.758 Job: Nvme3n1 ended in about 0.69 seconds with error 00:26:48.758 Verification LBA range: start 0x0 length 0x400 00:26:48.758 Nvme3n1 : 0.69 365.16 22.82 93.11 0.00 135660.64 70173.93 138734.67 00:26:48.758 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:48.758 Job: Nvme4n1 ended in about 0.67 seconds with error 00:26:48.758 Verification LBA range: start 0x0 length 0x400 00:26:48.758 Nvme4n1 : 0.67 442.84 27.68 96.07 0.00 113907.66 3932.16 116956.55 00:26:48.758 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:48.758 Job: Nvme5n1 ended in about 0.68 seconds with error 00:26:48.758 Verification LBA range: start 0x0 length 0x400 00:26:48.758 Nvme5n1 : 0.68 427.58 26.72 94.04 0.00 116469.19 27827.59 95985.03 00:26:48.758 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:48.758 Job: Nvme6n1 ended in about 0.69 seconds with error 00:26:48.758 Verification LBA range: start 0x0 length 0x400 00:26:48.758 Nvme6n1 : 0.69 361.59 22.60 92.20 0.00 132518.75 73803.62 108083.99 00:26:48.758 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:48.758 Job: Nvme7n1 ended in about 0.70 seconds with error 00:26:48.758 Verification LBA range: start 0x0 length 0x400 00:26:48.758 Nvme7n1 : 0.70 360.47 22.53 91.91 0.00 131439.45 65737.65 131475.30 00:26:48.758 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:48.758 Job: Nvme8n1 ended in about 0.70 seconds with error 00:26:48.758 Verification LBA range: start 0x0 length 0x400 00:26:48.758 Nvme8n1 : 0.70 359.37 22.46 91.63 0.00 130348.62 63317.86 124215.93 00:26:48.758 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:48.758 Job: Nvme9n1 ended in about 0.70 seconds with error 00:26:48.758 Verification LBA range: start 0x0 length 0x400 00:26:48.758 Nvme9n1 : 0.70 356.63 22.29 90.93 0.00 129922.28 65334.35 119376.34 00:26:48.758 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:48.758 Job: Nvme10n1 ended in about 0.66 seconds with error 00:26:48.758 Verification LBA range: start 0x0 length 0x400 00:26:48.758 Nvme10n1 : 0.66 380.14 23.76 96.54 0.00 119738.16 10435.35 107277.39 00:26:48.758 =================================================================================================================== 00:26:48.758 Total : 3789.14 236.82 933.57 0.00 128056.20 3932.16 150833.62 00:26:48.758 [2024-07-22 18:03:52.898234] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:48.758 [2024-07-22 18:03:52.898280] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] 
resetting controller 00:26:48.758 [2024-07-22 18:03:52.898315] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:26:48.758 [2024-07-22 18:03:52.898322] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:26:48.758 [2024-07-22 18:03:52.898330] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:26:48.758 [2024-07-22 18:03:52.898437] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.758 [2024-07-22 18:03:52.898617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.758 [2024-07-22 18:03:52.898969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.758 [2024-07-22 18:03:52.898979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec2ed0 with addr=10.0.0.2, port=4420 00:26:48.758 [2024-07-22 18:03:52.898988] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2ed0 is same with the state(5) to be set 00:26:48.758 [2024-07-22 18:03:52.899040] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:48.758 [2024-07-22 18:03:52.899051] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:48.758 [2024-07-22 18:03:52.899327] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:26:48.758 [2024-07-22 18:03:52.899338] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:26:48.758 [2024-07-22 18:03:52.899394] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec2ed0 (9): Bad file descriptor 00:26:48.758 [2024-07-22 18:03:52.899441] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:26:48.758 [2024-07-22 18:03:52.899451] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:26:48.758 [2024-07-22 18:03:52.899459] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.758 [2024-07-22 18:03:52.899467] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:26:48.758 [2024-07-22 18:03:52.899475] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:26:48.758 [2024-07-22 18:03:52.899829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.758 [2024-07-22 18:03:52.900118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.758 [2024-07-22 18:03:52.900127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f21b60 with addr=10.0.0.2, port=4420 00:26:48.758 [2024-07-22 18:03:52.900134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f21b60 is same with the state(5) to be set 00:26:48.758 [2024-07-22 18:03:52.900311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.758 [2024-07-22 18:03:52.900544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.758 [2024-07-22 18:03:52.900554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb9d80 
with addr=10.0.0.2, port=4420 00:26:48.758 [2024-07-22 18:03:52.900561] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb9d80 is same with the state(5) to be set 00:26:48.758 [2024-07-22 18:03:52.900567] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:26:48.758 [2024-07-22 18:03:52.900573] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:26:48.758 [2024-07-22 18:03:52.900580] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:26:48.758 [2024-07-22 18:03:52.900610] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:26:48.758 [2024-07-22 18:03:52.900619] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:26:48.758 [2024-07-22 18:03:52.900636] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.758 [2024-07-22 18:03:52.900842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.758 [2024-07-22 18:03:52.901024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.758 [2024-07-22 18:03:52.901033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df9f20 with addr=10.0.0.2, port=4420 00:26:48.758 [2024-07-22 18:03:52.901040] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9f20 is same with the state(5) to be set 00:26:48.758 [2024-07-22 18:03:52.901239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.758 [2024-07-22 18:03:52.901434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.759 [2024-07-22 18:03:52.901443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d5c660 with addr=10.0.0.2, port=4420 00:26:48.759 [2024-07-22 18:03:52.901449] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5c660 is same with the state(5) to be set 00:26:48.759 [2024-07-22 18:03:52.901549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.759 [2024-07-22 18:03:52.901877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.759 [2024-07-22 18:03:52.901885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df71c0 with addr=10.0.0.2, port=4420 00:26:48.759 [2024-07-22 18:03:52.901892] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df71c0 is same with the state(5) to be set 00:26:48.759 [2024-07-22 18:03:52.902198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.759 [2024-07-22 18:03:52.902358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.759 [2024-07-22 18:03:52.902368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fad8e0 with addr=10.0.0.2, port=4420 00:26:48.759 [2024-07-22 18:03:52.902374] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fad8e0 is same with the state(5) to be set 00:26:48.759 [2024-07-22 18:03:52.902584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.759 [2024-07-22 18:03:52.902820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:26:48.759 [2024-07-22 18:03:52.902828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbd250 with addr=10.0.0.2, port=4420 00:26:48.759 [2024-07-22 18:03:52.902835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbd250 is same with the state(5) to be set 00:26:48.759 [2024-07-22 18:03:52.902844] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f21b60 (9): Bad file descriptor 00:26:48.759 [2024-07-22 18:03:52.902853] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb9d80 (9): Bad file descriptor 00:26:48.759 [2024-07-22 18:03:52.903018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.759 [2024-07-22 18:03:52.903317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.759 [2024-07-22 18:03:52.903325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e1b240 with addr=10.0.0.2, port=4420 00:26:48.759 [2024-07-22 18:03:52.903332] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1b240 is same with the state(5) to be set 00:26:48.759 [2024-07-22 18:03:52.903626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.759 [2024-07-22 18:03:52.903794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.759 [2024-07-22 18:03:52.903804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fadce0 with addr=10.0.0.2, port=4420 00:26:48.759 [2024-07-22 18:03:52.903810] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fadce0 is same with the state(5) to be set 00:26:48.759 [2024-07-22 18:03:52.903818] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df9f20 (9): Bad file descriptor 00:26:48.759 [2024-07-22 18:03:52.903827] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d5c660 (9): Bad file descriptor 00:26:48.759 [2024-07-22 18:03:52.903835] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df71c0 (9): Bad file descriptor 00:26:48.759 [2024-07-22 18:03:52.903843] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fad8e0 (9): Bad file descriptor 00:26:48.759 [2024-07-22 18:03:52.903852] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbd250 (9): Bad file descriptor 00:26:48.759 [2024-07-22 18:03:52.903859] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:26:48.759 [2024-07-22 18:03:52.903865] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:26:48.759 [2024-07-22 18:03:52.903871] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:26:48.759 [2024-07-22 18:03:52.903879] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:26:48.759 [2024-07-22 18:03:52.903885] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:26:48.759 [2024-07-22 18:03:52.903891] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:26:48.759 [2024-07-22 18:03:52.903927] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.759 [2024-07-22 18:03:52.903935] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.759 [2024-07-22 18:03:52.903942] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e1b240 (9): Bad file descriptor 00:26:48.759 [2024-07-22 18:03:52.903950] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fadce0 (9): Bad file descriptor 00:26:48.759 [2024-07-22 18:03:52.903958] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:26:48.759 [2024-07-22 18:03:52.903963] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:26:48.759 [2024-07-22 18:03:52.903970] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:26:48.759 [2024-07-22 18:03:52.903978] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:26:48.759 [2024-07-22 18:03:52.903984] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:26:48.759 [2024-07-22 18:03:52.903990] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:26:48.759 [2024-07-22 18:03:52.903999] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:48.759 [2024-07-22 18:03:52.904005] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:48.759 [2024-07-22 18:03:52.904014] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:48.759 [2024-07-22 18:03:52.904023] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:26:48.759 [2024-07-22 18:03:52.904029] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:26:48.759 [2024-07-22 18:03:52.904036] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:26:48.759 [2024-07-22 18:03:52.904044] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:26:48.759 [2024-07-22 18:03:52.904050] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:26:48.759 [2024-07-22 18:03:52.904056] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:26:48.759 [2024-07-22 18:03:52.904083] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.759 [2024-07-22 18:03:52.904090] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.759 [2024-07-22 18:03:52.904095] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.759 [2024-07-22 18:03:52.904101] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.759 [2024-07-22 18:03:52.904107] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:48.759 [2024-07-22 18:03:52.904112] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:26:48.759 [2024-07-22 18:03:52.904118] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:26:48.759 [2024-07-22 18:03:52.904124] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:26:48.759 [2024-07-22 18:03:52.904133] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:26:48.759 [2024-07-22 18:03:52.904138] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:26:48.759 [2024-07-22 18:03:52.904145] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:26:48.759 [2024-07-22 18:03:52.904168] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:48.759 [2024-07-22 18:03:52.904175] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:49.020 18:03:53 -- target/shutdown.sh@135 -- # nvmfpid= 00:26:49.020 18:03:53 -- target/shutdown.sh@138 -- # sleep 1 00:26:49.961 18:03:54 -- target/shutdown.sh@141 -- # kill -9 1792411 00:26:49.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (1792411) - No such process 00:26:49.961 18:03:54 -- target/shutdown.sh@141 -- # true 00:26:49.961 18:03:54 -- target/shutdown.sh@143 -- # stoptarget 00:26:49.961 18:03:54 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:26:49.961 18:03:54 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:49.961 18:03:54 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:49.961 18:03:54 -- target/shutdown.sh@45 -- # nvmftestfini 00:26:49.961 18:03:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:49.961 18:03:54 -- nvmf/common.sh@116 -- # sync 00:26:49.961 18:03:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:49.961 18:03:54 -- nvmf/common.sh@119 -- # set +e 00:26:49.961 18:03:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:49.961 18:03:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:49.961 rmmod nvme_tcp 00:26:49.961 rmmod nvme_fabrics 00:26:49.961 rmmod nvme_keyring 00:26:49.961 18:03:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:49.961 18:03:54 -- nvmf/common.sh@123 -- # set -e 00:26:49.961 18:03:54 -- nvmf/common.sh@124 -- # return 0 00:26:49.961 18:03:54 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:26:49.961 18:03:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:49.961 18:03:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:49.961 18:03:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:49.961 18:03:54 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:49.961 18:03:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:49.961 18:03:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.962 18:03:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:49.962 18:03:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:52.511 18:03:56 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:52.511 00:26:52.511 real 0m7.387s 00:26:52.511 user 0m17.075s 00:26:52.511 sys 0m1.214s 
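This closes out nvmf_shutdown_tc3: the stale bdevperf PID 1792411 is already gone (hence "kill: No such process" followed by true), the job state and RPC files are removed, nvme-tcp and nvme-fabrics are unloaded, the test address on cvl_0_1 is flushed, and the run's real/user/sys timing is reported. Condensed into one place, the teardown visible above amounts to the following sketch; the paths and interface names are taken from the log, $nvmfpid stands in for the literal PID, and the ip netns delete line is an assumption about what remove_spdk_ns does rather than a command shown here:

    # Illustrative condensation of the cleanup steps above, not the shutdown.sh source.
    kill -9 "$nvmfpid" 2>/dev/null || true    # target already exited -> "No such process"
    rm -f ./local-job0-0-verify.state
    rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf \
           /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
    modprobe -v -r nvme-tcp        # the -v output above shows nvme_tcp, nvme_fabrics and nvme_keyring being removed
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed effect of remove_spdk_ns
    ip -4 addr flush cvl_0_1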
00:26:52.511 18:03:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:52.511 18:03:56 -- common/autotest_common.sh@10 -- # set +x 00:26:52.511 ************************************ 00:26:52.511 END TEST nvmf_shutdown_tc3 00:26:52.511 ************************************ 00:26:52.511 18:03:56 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:26:52.511 00:26:52.511 real 0m32.292s 00:26:52.511 user 1m12.660s 00:26:52.511 sys 0m10.047s 00:26:52.511 18:03:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:52.511 18:03:56 -- common/autotest_common.sh@10 -- # set +x 00:26:52.511 ************************************ 00:26:52.511 END TEST nvmf_shutdown 00:26:52.511 ************************************ 00:26:52.511 18:03:56 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:26:52.511 18:03:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:52.511 18:03:56 -- common/autotest_common.sh@10 -- # set +x 00:26:52.512 18:03:56 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:26:52.512 18:03:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:52.512 18:03:56 -- common/autotest_common.sh@10 -- # set +x 00:26:52.512 18:03:56 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:26:52.512 18:03:56 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:52.512 18:03:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:52.512 18:03:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:52.512 18:03:56 -- common/autotest_common.sh@10 -- # set +x 00:26:52.512 ************************************ 00:26:52.512 START TEST nvmf_multicontroller 00:26:52.512 ************************************ 00:26:52.512 18:03:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:52.512 * Looking for test storage... 
00:26:52.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:52.512 18:03:56 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:52.512 18:03:56 -- nvmf/common.sh@7 -- # uname -s 00:26:52.512 18:03:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:52.512 18:03:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:52.512 18:03:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:52.512 18:03:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:52.512 18:03:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:52.512 18:03:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:52.512 18:03:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:52.512 18:03:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:52.512 18:03:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:52.512 18:03:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:52.512 18:03:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:26:52.512 18:03:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:26:52.512 18:03:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:52.512 18:03:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:52.512 18:03:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:52.512 18:03:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:52.512 18:03:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:52.512 18:03:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:52.512 18:03:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:52.512 18:03:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.512 18:03:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.512 18:03:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.512 18:03:56 -- paths/export.sh@5 -- # export PATH 00:26:52.512 18:03:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.512 18:03:56 -- nvmf/common.sh@46 -- # : 0 00:26:52.512 18:03:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:52.512 18:03:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:52.512 18:03:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:52.512 18:03:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:52.512 18:03:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:52.512 18:03:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:52.512 18:03:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:52.512 18:03:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:52.512 18:03:56 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:52.512 18:03:56 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:52.512 18:03:56 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:26:52.512 18:03:56 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:26:52.512 18:03:56 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:52.512 18:03:56 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:26:52.512 18:03:56 -- host/multicontroller.sh@23 -- # nvmftestinit 00:26:52.512 18:03:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:52.512 18:03:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:52.512 18:03:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:52.512 18:03:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:52.512 18:03:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:52.512 18:03:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.512 18:03:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:52.512 18:03:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:52.512 18:03:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:52.512 18:03:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:52.512 18:03:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:52.512 18:03:56 -- common/autotest_common.sh@10 -- # set +x 00:27:00.772 18:04:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:00.772 18:04:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:00.772 18:04:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:00.773 18:04:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:00.773 
18:04:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:00.773 18:04:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:00.773 18:04:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:00.773 18:04:04 -- nvmf/common.sh@294 -- # net_devs=() 00:27:00.773 18:04:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:00.773 18:04:04 -- nvmf/common.sh@295 -- # e810=() 00:27:00.773 18:04:04 -- nvmf/common.sh@295 -- # local -ga e810 00:27:00.773 18:04:04 -- nvmf/common.sh@296 -- # x722=() 00:27:00.773 18:04:04 -- nvmf/common.sh@296 -- # local -ga x722 00:27:00.773 18:04:04 -- nvmf/common.sh@297 -- # mlx=() 00:27:00.773 18:04:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:00.773 18:04:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:00.773 18:04:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:00.773 18:04:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:00.773 18:04:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:00.773 18:04:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:00.773 18:04:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:00.773 18:04:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:00.773 18:04:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:00.773 18:04:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:00.773 18:04:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:00.773 18:04:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:00.773 18:04:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:00.773 18:04:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:00.773 18:04:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:00.773 18:04:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:00.773 18:04:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:00.773 18:04:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:00.773 18:04:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:00.773 18:04:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:00.773 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:00.773 18:04:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:00.773 18:04:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:00.773 18:04:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.773 18:04:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.773 18:04:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:00.773 18:04:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:00.773 18:04:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:00.773 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:00.773 18:04:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:00.773 18:04:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:00.773 18:04:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.773 18:04:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.773 18:04:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:00.773 18:04:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:00.773 18:04:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:00.773 18:04:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:00.773 18:04:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:27:00.773 18:04:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.773 18:04:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:00.773 18:04:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.773 18:04:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:00.773 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:00.773 18:04:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.773 18:04:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:00.773 18:04:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.773 18:04:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:00.773 18:04:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.773 18:04:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:00.773 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:00.773 18:04:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.773 18:04:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:00.773 18:04:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:00.773 18:04:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:00.773 18:04:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:00.773 18:04:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:00.773 18:04:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:00.773 18:04:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:00.773 18:04:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:00.773 18:04:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:00.773 18:04:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:00.773 18:04:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:00.773 18:04:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:00.773 18:04:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:00.773 18:04:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:00.773 18:04:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:00.773 18:04:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:00.773 18:04:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:00.773 18:04:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:00.773 18:04:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:00.773 18:04:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:00.773 18:04:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:00.773 18:04:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:00.773 18:04:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:00.773 18:04:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:00.773 18:04:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:00.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:00.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.527 ms 00:27:00.773 00:27:00.773 --- 10.0.0.2 ping statistics --- 00:27:00.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:00.773 rtt min/avg/max/mdev = 0.527/0.527/0.527/0.000 ms 00:27:00.773 18:04:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:00.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:00.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:27:00.773 00:27:00.773 --- 10.0.0.1 ping statistics --- 00:27:00.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:00.773 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:27:00.773 18:04:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:00.773 18:04:04 -- nvmf/common.sh@410 -- # return 0 00:27:00.773 18:04:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:00.773 18:04:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:00.773 18:04:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:00.773 18:04:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:00.773 18:04:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:00.773 18:04:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:00.773 18:04:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:00.773 18:04:04 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:00.773 18:04:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:00.773 18:04:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:00.773 18:04:04 -- common/autotest_common.sh@10 -- # set +x 00:27:00.773 18:04:04 -- nvmf/common.sh@469 -- # nvmfpid=1797591 00:27:00.773 18:04:04 -- nvmf/common.sh@470 -- # waitforlisten 1797591 00:27:00.773 18:04:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:00.773 18:04:04 -- common/autotest_common.sh@819 -- # '[' -z 1797591 ']' 00:27:00.773 18:04:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:00.773 18:04:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:00.773 18:04:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:00.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:00.773 18:04:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:00.773 18:04:04 -- common/autotest_common.sh@10 -- # set +x 00:27:00.773 [2024-07-22 18:04:04.871472] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:00.773 [2024-07-22 18:04:04.871518] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:00.773 EAL: No free 2048 kB hugepages reported on node 1 00:27:00.774 [2024-07-22 18:04:04.940592] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:00.774 [2024-07-22 18:04:05.001112] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:00.774 [2024-07-22 18:04:05.001238] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:00.774 [2024-07-22 18:04:05.001246] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:00.774 [2024-07-22 18:04:05.001252] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:00.774 [2024-07-22 18:04:05.001358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:00.774 [2024-07-22 18:04:05.001486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:00.774 [2024-07-22 18:04:05.001578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.714 18:04:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:01.714 18:04:05 -- common/autotest_common.sh@852 -- # return 0 00:27:01.714 18:04:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:01.714 18:04:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:01.714 18:04:05 -- common/autotest_common.sh@10 -- # set +x 00:27:01.714 18:04:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:01.714 18:04:05 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:01.714 18:04:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:01.714 18:04:05 -- common/autotest_common.sh@10 -- # set +x 00:27:01.714 [2024-07-22 18:04:05.763062] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:01.714 18:04:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:01.714 18:04:05 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:01.714 18:04:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:01.714 18:04:05 -- common/autotest_common.sh@10 -- # set +x 00:27:01.714 Malloc0 00:27:01.714 18:04:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:01.714 18:04:05 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:01.714 18:04:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:01.714 18:04:05 -- common/autotest_common.sh@10 -- # set +x 00:27:01.714 18:04:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:01.714 18:04:05 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:01.714 18:04:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:01.714 18:04:05 -- common/autotest_common.sh@10 -- # set +x 00:27:01.714 18:04:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:01.714 18:04:05 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:01.714 18:04:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:01.714 18:04:05 -- common/autotest_common.sh@10 -- # set +x 00:27:01.714 [2024-07-22 18:04:05.833172] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:01.714 18:04:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:01.714 18:04:05 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:01.714 18:04:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:01.714 18:04:05 -- common/autotest_common.sh@10 -- # set +x 00:27:01.714 [2024-07-22 18:04:05.845118] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:01.714 18:04:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:01.714 18:04:05 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:01.714 18:04:05 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:27:01.714 18:04:05 -- common/autotest_common.sh@10 -- # set +x 00:27:01.714 Malloc1 00:27:01.714 18:04:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:01.714 18:04:05 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:01.714 18:04:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:01.714 18:04:05 -- common/autotest_common.sh@10 -- # set +x 00:27:01.714 18:04:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:01.714 18:04:05 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:01.714 18:04:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:01.714 18:04:05 -- common/autotest_common.sh@10 -- # set +x 00:27:01.714 18:04:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:01.714 18:04:05 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:01.714 18:04:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:01.714 18:04:05 -- common/autotest_common.sh@10 -- # set +x 00:27:01.714 18:04:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:01.715 18:04:05 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:01.715 18:04:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:01.715 18:04:05 -- common/autotest_common.sh@10 -- # set +x 00:27:01.715 18:04:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:01.715 18:04:05 -- host/multicontroller.sh@44 -- # bdevperf_pid=1797647 00:27:01.715 18:04:05 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:01.715 18:04:05 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:01.715 18:04:05 -- host/multicontroller.sh@47 -- # waitforlisten 1797647 /var/tmp/bdevperf.sock 00:27:01.715 18:04:05 -- common/autotest_common.sh@819 -- # '[' -z 1797647 ']' 00:27:01.715 18:04:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:01.715 18:04:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:01.715 18:04:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:01.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:01.715 18:04:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:01.715 18:04:05 -- common/autotest_common.sh@10 -- # set +x 00:27:02.654 18:04:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:02.654 18:04:06 -- common/autotest_common.sh@852 -- # return 0 00:27:02.654 18:04:06 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:02.654 18:04:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.654 18:04:06 -- common/autotest_common.sh@10 -- # set +x 00:27:02.915 NVMe0n1 00:27:02.915 18:04:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:02.915 18:04:06 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:02.915 18:04:06 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:02.915 18:04:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.915 18:04:06 -- common/autotest_common.sh@10 -- # set +x 00:27:02.915 18:04:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:02.915 1 00:27:02.915 18:04:06 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:02.915 18:04:06 -- common/autotest_common.sh@640 -- # local es=0 00:27:02.915 18:04:06 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:02.915 18:04:06 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:02.915 18:04:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:02.915 18:04:06 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:02.915 18:04:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:02.915 18:04:06 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:02.915 18:04:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.915 18:04:06 -- common/autotest_common.sh@10 -- # set +x 00:27:02.915 request: 00:27:02.915 { 00:27:02.915 "name": "NVMe0", 00:27:02.915 "trtype": "tcp", 00:27:02.915 "traddr": "10.0.0.2", 00:27:02.915 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:02.915 "hostaddr": "10.0.0.2", 00:27:02.915 "hostsvcid": "60000", 00:27:02.915 "adrfam": "ipv4", 00:27:02.915 "trsvcid": "4420", 00:27:02.915 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:02.915 "method": "bdev_nvme_attach_controller", 00:27:02.915 "req_id": 1 00:27:02.915 } 00:27:02.915 Got JSON-RPC error response 00:27:02.915 response: 00:27:02.915 { 00:27:02.915 "code": -114, 00:27:02.916 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:02.916 } 00:27:02.916 18:04:07 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:02.916 18:04:07 -- common/autotest_common.sh@643 -- # es=1 00:27:02.916 18:04:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:02.916 18:04:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:02.916 18:04:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:02.916 18:04:07 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:02.916 18:04:07 -- common/autotest_common.sh@640 -- # local es=0 00:27:02.916 18:04:07 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:02.916 18:04:07 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:02.916 18:04:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:02.916 18:04:07 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:02.916 18:04:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:02.916 18:04:07 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:02.916 18:04:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.916 18:04:07 -- common/autotest_common.sh@10 -- # set +x 00:27:02.916 request: 00:27:02.916 { 00:27:02.916 "name": "NVMe0", 00:27:02.916 "trtype": "tcp", 00:27:02.916 "traddr": "10.0.0.2", 00:27:02.916 "hostaddr": "10.0.0.2", 00:27:02.916 "hostsvcid": "60000", 00:27:02.916 "adrfam": "ipv4", 00:27:02.916 "trsvcid": "4420", 00:27:02.916 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:02.916 "method": "bdev_nvme_attach_controller", 00:27:02.916 "req_id": 1 00:27:02.916 } 00:27:02.916 Got JSON-RPC error response 00:27:02.916 response: 00:27:02.916 { 00:27:02.916 "code": -114, 00:27:02.916 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:02.916 } 00:27:02.916 18:04:07 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:02.916 18:04:07 -- common/autotest_common.sh@643 -- # es=1 00:27:02.916 18:04:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:02.916 18:04:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:02.916 18:04:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:02.916 18:04:07 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:02.916 18:04:07 -- common/autotest_common.sh@640 -- # local es=0 00:27:02.916 18:04:07 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:02.916 18:04:07 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:02.916 18:04:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:02.916 18:04:07 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:02.916 18:04:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:02.916 18:04:07 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:02.916 18:04:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.916 18:04:07 -- common/autotest_common.sh@10 -- # set +x 00:27:02.916 request: 00:27:02.916 { 00:27:02.916 "name": "NVMe0", 00:27:02.916 "trtype": "tcp", 00:27:02.916 "traddr": "10.0.0.2", 00:27:02.916 "hostaddr": 
"10.0.0.2", 00:27:02.916 "hostsvcid": "60000", 00:27:02.916 "adrfam": "ipv4", 00:27:02.916 "trsvcid": "4420", 00:27:02.916 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:02.916 "multipath": "disable", 00:27:02.916 "method": "bdev_nvme_attach_controller", 00:27:02.916 "req_id": 1 00:27:02.916 } 00:27:02.916 Got JSON-RPC error response 00:27:02.916 response: 00:27:02.916 { 00:27:02.916 "code": -114, 00:27:02.916 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:27:02.916 } 00:27:02.916 18:04:07 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:02.916 18:04:07 -- common/autotest_common.sh@643 -- # es=1 00:27:02.916 18:04:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:02.916 18:04:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:02.916 18:04:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:02.916 18:04:07 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:02.916 18:04:07 -- common/autotest_common.sh@640 -- # local es=0 00:27:02.916 18:04:07 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:02.916 18:04:07 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:02.916 18:04:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:02.916 18:04:07 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:02.916 18:04:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:02.916 18:04:07 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:02.916 18:04:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.916 18:04:07 -- common/autotest_common.sh@10 -- # set +x 00:27:02.916 request: 00:27:02.916 { 00:27:02.916 "name": "NVMe0", 00:27:02.916 "trtype": "tcp", 00:27:02.916 "traddr": "10.0.0.2", 00:27:02.916 "hostaddr": "10.0.0.2", 00:27:02.916 "hostsvcid": "60000", 00:27:02.916 "adrfam": "ipv4", 00:27:02.916 "trsvcid": "4420", 00:27:02.916 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:02.916 "multipath": "failover", 00:27:02.916 "method": "bdev_nvme_attach_controller", 00:27:02.916 "req_id": 1 00:27:02.916 } 00:27:02.916 Got JSON-RPC error response 00:27:02.916 response: 00:27:02.916 { 00:27:02.916 "code": -114, 00:27:02.916 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:02.916 } 00:27:02.916 18:04:07 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:02.916 18:04:07 -- common/autotest_common.sh@643 -- # es=1 00:27:02.916 18:04:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:02.916 18:04:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:02.916 18:04:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:02.916 18:04:07 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:02.916 18:04:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.916 18:04:07 -- common/autotest_common.sh@10 -- # set +x 00:27:02.916 00:27:02.916 18:04:07 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:27:02.916 18:04:07 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:02.916 18:04:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.916 18:04:07 -- common/autotest_common.sh@10 -- # set +x 00:27:02.916 18:04:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:02.916 18:04:07 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:02.916 18:04:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:02.916 18:04:07 -- common/autotest_common.sh@10 -- # set +x 00:27:03.175 00:27:03.175 18:04:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:03.175 18:04:07 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:03.175 18:04:07 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:03.175 18:04:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:03.175 18:04:07 -- common/autotest_common.sh@10 -- # set +x 00:27:03.175 18:04:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:03.175 18:04:07 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:03.175 18:04:07 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:04.553 0 00:27:04.553 18:04:08 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:04.553 18:04:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:04.553 18:04:08 -- common/autotest_common.sh@10 -- # set +x 00:27:04.553 18:04:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:04.553 18:04:08 -- host/multicontroller.sh@100 -- # killprocess 1797647 00:27:04.553 18:04:08 -- common/autotest_common.sh@926 -- # '[' -z 1797647 ']' 00:27:04.553 18:04:08 -- common/autotest_common.sh@930 -- # kill -0 1797647 00:27:04.553 18:04:08 -- common/autotest_common.sh@931 -- # uname 00:27:04.553 18:04:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:04.553 18:04:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1797647 00:27:04.553 18:04:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:04.553 18:04:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:04.553 18:04:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1797647' 00:27:04.553 killing process with pid 1797647 00:27:04.553 18:04:08 -- common/autotest_common.sh@945 -- # kill 1797647 00:27:04.553 18:04:08 -- common/autotest_common.sh@950 -- # wait 1797647 00:27:04.553 18:04:08 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:04.553 18:04:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:04.553 18:04:08 -- common/autotest_common.sh@10 -- # set +x 00:27:04.553 18:04:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:04.553 18:04:08 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:04.553 18:04:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:04.553 18:04:08 -- common/autotest_common.sh@10 -- # set +x 00:27:04.553 18:04:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:04.553 18:04:08 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
00:27:04.553 18:04:08 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:04.553 18:04:08 -- common/autotest_common.sh@1597 -- # read -r file 00:27:04.553 18:04:08 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:04.553 18:04:08 -- common/autotest_common.sh@1596 -- # sort -u 00:27:04.553 18:04:08 -- common/autotest_common.sh@1598 -- # cat 00:27:04.553 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:04.553 [2024-07-22 18:04:05.968840] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:04.553 [2024-07-22 18:04:05.968914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1797647 ] 00:27:04.553 EAL: No free 2048 kB hugepages reported on node 1 00:27:04.553 [2024-07-22 18:04:06.051486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.553 [2024-07-22 18:04:06.110816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.553 [2024-07-22 18:04:07.387003] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name f394d6e8-d6c7-4be5-87e4-ea3849442359 already exists 00:27:04.553 [2024-07-22 18:04:07.387033] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:f394d6e8-d6c7-4be5-87e4-ea3849442359 alias for bdev NVMe1n1 00:27:04.553 [2024-07-22 18:04:07.387044] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:04.553 Running I/O for 1 seconds... 00:27:04.553 00:27:04.553 Latency(us) 00:27:04.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:04.553 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:04.553 NVMe0n1 : 1.00 22656.21 88.50 0.00 0.00 5636.34 3881.75 15829.46 00:27:04.553 =================================================================================================================== 00:27:04.553 Total : 22656.21 88.50 0.00 0.00 5636.34 3881.75 15829.46 00:27:04.553 Received shutdown signal, test time was about 1.000000 seconds 00:27:04.553 00:27:04.553 Latency(us) 00:27:04.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:04.553 =================================================================================================================== 00:27:04.553 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:04.553 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:04.553 18:04:08 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:04.553 18:04:08 -- common/autotest_common.sh@1597 -- # read -r file 00:27:04.553 18:04:08 -- host/multicontroller.sh@108 -- # nvmftestfini 00:27:04.553 18:04:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:04.553 18:04:08 -- nvmf/common.sh@116 -- # sync 00:27:04.553 18:04:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:04.553 18:04:08 -- nvmf/common.sh@119 -- # set +e 00:27:04.553 18:04:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:04.553 18:04:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:04.553 rmmod nvme_tcp 00:27:04.553 rmmod nvme_fabrics 00:27:04.553 rmmod nvme_keyring 00:27:04.812 18:04:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:04.812 18:04:08 -- nvmf/common.sh@123 -- # set 
-e 00:27:04.812 18:04:08 -- nvmf/common.sh@124 -- # return 0 00:27:04.812 18:04:08 -- nvmf/common.sh@477 -- # '[' -n 1797591 ']' 00:27:04.812 18:04:08 -- nvmf/common.sh@478 -- # killprocess 1797591 00:27:04.812 18:04:08 -- common/autotest_common.sh@926 -- # '[' -z 1797591 ']' 00:27:04.812 18:04:08 -- common/autotest_common.sh@930 -- # kill -0 1797591 00:27:04.812 18:04:08 -- common/autotest_common.sh@931 -- # uname 00:27:04.812 18:04:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:04.812 18:04:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1797591 00:27:04.812 18:04:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:04.812 18:04:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:04.812 18:04:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1797591' 00:27:04.812 killing process with pid 1797591 00:27:04.812 18:04:08 -- common/autotest_common.sh@945 -- # kill 1797591 00:27:04.812 18:04:08 -- common/autotest_common.sh@950 -- # wait 1797591 00:27:04.813 18:04:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:04.813 18:04:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:04.813 18:04:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:04.813 18:04:09 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:04.813 18:04:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:04.813 18:04:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.813 18:04:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:04.813 18:04:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.354 18:04:11 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:07.354 00:27:07.354 real 0m14.692s 00:27:07.354 user 0m17.595s 00:27:07.354 sys 0m6.848s 00:27:07.354 18:04:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:07.354 18:04:11 -- common/autotest_common.sh@10 -- # set +x 00:27:07.354 ************************************ 00:27:07.354 END TEST nvmf_multicontroller 00:27:07.354 ************************************ 00:27:07.354 18:04:11 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:07.354 18:04:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:07.354 18:04:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:07.354 18:04:11 -- common/autotest_common.sh@10 -- # set +x 00:27:07.354 ************************************ 00:27:07.354 START TEST nvmf_aer 00:27:07.354 ************************************ 00:27:07.354 18:04:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:07.354 * Looking for test storage... 
00:27:07.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:07.354 18:04:11 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:07.354 18:04:11 -- nvmf/common.sh@7 -- # uname -s 00:27:07.354 18:04:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:07.354 18:04:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:07.354 18:04:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:07.355 18:04:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:07.355 18:04:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:07.355 18:04:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:07.355 18:04:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:07.355 18:04:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:07.355 18:04:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:07.355 18:04:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:07.355 18:04:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:27:07.355 18:04:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:27:07.355 18:04:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:07.355 18:04:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:07.355 18:04:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:07.355 18:04:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:07.355 18:04:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:07.355 18:04:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:07.355 18:04:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:07.355 18:04:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.355 18:04:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.355 18:04:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.355 18:04:11 -- paths/export.sh@5 -- # export PATH 00:27:07.355 18:04:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.355 18:04:11 -- nvmf/common.sh@46 -- # : 0 00:27:07.355 18:04:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:07.355 18:04:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:07.355 18:04:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:07.355 18:04:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:07.355 18:04:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:07.355 18:04:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:07.355 18:04:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:07.355 18:04:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:07.355 18:04:11 -- host/aer.sh@11 -- # nvmftestinit 00:27:07.355 18:04:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:07.355 18:04:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:07.355 18:04:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:07.355 18:04:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:07.355 18:04:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:07.355 18:04:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.355 18:04:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:07.355 18:04:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.355 18:04:11 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:07.355 18:04:11 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:07.355 18:04:11 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:07.355 18:04:11 -- common/autotest_common.sh@10 -- # set +x 00:27:15.495 18:04:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:15.495 18:04:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:15.495 18:04:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:15.495 18:04:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:15.495 18:04:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:15.495 18:04:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:15.495 18:04:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:15.495 18:04:19 -- nvmf/common.sh@294 -- # net_devs=() 00:27:15.495 18:04:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:15.495 18:04:19 -- nvmf/common.sh@295 -- # e810=() 00:27:15.495 18:04:19 -- nvmf/common.sh@295 -- # local -ga e810 00:27:15.495 18:04:19 -- nvmf/common.sh@296 -- # x722=() 00:27:15.495 
18:04:19 -- nvmf/common.sh@296 -- # local -ga x722 00:27:15.495 18:04:19 -- nvmf/common.sh@297 -- # mlx=() 00:27:15.495 18:04:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:15.495 18:04:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:15.495 18:04:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:15.495 18:04:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:15.495 18:04:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:15.495 18:04:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:15.495 18:04:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:15.495 18:04:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:15.495 18:04:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:15.495 18:04:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:15.495 18:04:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:15.495 18:04:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:15.495 18:04:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:15.495 18:04:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:15.495 18:04:19 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:15.495 18:04:19 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:15.495 18:04:19 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:15.495 18:04:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:15.495 18:04:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:15.495 18:04:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:15.495 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:15.495 18:04:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:15.495 18:04:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:15.495 18:04:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.495 18:04:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.495 18:04:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:15.495 18:04:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:15.495 18:04:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:15.495 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:15.495 18:04:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:15.495 18:04:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:15.495 18:04:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.495 18:04:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.495 18:04:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:15.495 18:04:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:15.495 18:04:19 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:15.495 18:04:19 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:15.495 18:04:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:15.495 18:04:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.495 18:04:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:15.495 18:04:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.495 18:04:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:15.495 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:15.495 18:04:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.495 18:04:19 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:15.495 18:04:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.495 18:04:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:15.495 18:04:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.495 18:04:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:15.495 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:15.495 18:04:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.495 18:04:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:15.495 18:04:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:15.495 18:04:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:15.495 18:04:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:15.495 18:04:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:15.495 18:04:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:15.495 18:04:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:15.495 18:04:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:15.495 18:04:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:15.495 18:04:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:15.495 18:04:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:15.495 18:04:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:15.495 18:04:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:15.495 18:04:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:15.495 18:04:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:15.495 18:04:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:15.495 18:04:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:15.495 18:04:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:15.495 18:04:19 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:15.495 18:04:19 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:15.495 18:04:19 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:15.495 18:04:19 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:15.495 18:04:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:15.495 18:04:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:15.495 18:04:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:15.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:15.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.526 ms 00:27:15.495 00:27:15.495 --- 10.0.0.2 ping statistics --- 00:27:15.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:15.495 rtt min/avg/max/mdev = 0.526/0.526/0.526/0.000 ms 00:27:15.495 18:04:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:15.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:15.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:27:15.495 00:27:15.495 --- 10.0.0.1 ping statistics --- 00:27:15.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:15.495 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:27:15.495 18:04:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:15.495 18:04:19 -- nvmf/common.sh@410 -- # return 0 00:27:15.495 18:04:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:15.495 18:04:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:15.495 18:04:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:15.495 18:04:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:15.495 18:04:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:15.495 18:04:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:15.495 18:04:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:15.495 18:04:19 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:15.495 18:04:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:15.495 18:04:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:15.495 18:04:19 -- common/autotest_common.sh@10 -- # set +x 00:27:15.495 18:04:19 -- nvmf/common.sh@469 -- # nvmfpid=1802596 00:27:15.495 18:04:19 -- nvmf/common.sh@470 -- # waitforlisten 1802596 00:27:15.495 18:04:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:15.495 18:04:19 -- common/autotest_common.sh@819 -- # '[' -z 1802596 ']' 00:27:15.496 18:04:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:15.496 18:04:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:15.496 18:04:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:15.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:15.496 18:04:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:15.496 18:04:19 -- common/autotest_common.sh@10 -- # set +x 00:27:15.496 [2024-07-22 18:04:19.733034] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:15.496 [2024-07-22 18:04:19.733121] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:15.757 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.757 [2024-07-22 18:04:19.830702] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:15.757 [2024-07-22 18:04:19.922935] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:15.757 [2024-07-22 18:04:19.923093] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:15.757 [2024-07-22 18:04:19.923102] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:15.757 [2024-07-22 18:04:19.923109] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:15.757 [2024-07-22 18:04:19.923259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:15.757 [2024-07-22 18:04:19.923387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:15.757 [2024-07-22 18:04:19.923465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:15.757 [2024-07-22 18:04:19.923496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.328 18:04:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:16.328 18:04:20 -- common/autotest_common.sh@852 -- # return 0 00:27:16.328 18:04:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:16.328 18:04:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:16.328 18:04:20 -- common/autotest_common.sh@10 -- # set +x 00:27:16.589 18:04:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:16.589 18:04:20 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:16.589 18:04:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:16.589 18:04:20 -- common/autotest_common.sh@10 -- # set +x 00:27:16.589 [2024-07-22 18:04:20.629532] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:16.589 18:04:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:16.589 18:04:20 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:16.589 18:04:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:16.589 18:04:20 -- common/autotest_common.sh@10 -- # set +x 00:27:16.589 Malloc0 00:27:16.589 18:04:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:16.589 18:04:20 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:16.589 18:04:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:16.589 18:04:20 -- common/autotest_common.sh@10 -- # set +x 00:27:16.589 18:04:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:16.589 18:04:20 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:16.589 18:04:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:16.589 18:04:20 -- common/autotest_common.sh@10 -- # set +x 00:27:16.589 18:04:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:16.589 18:04:20 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:16.589 18:04:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:16.589 18:04:20 -- common/autotest_common.sh@10 -- # set +x 00:27:16.589 [2024-07-22 18:04:20.681601] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:16.589 18:04:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:16.589 18:04:20 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:16.589 18:04:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:16.589 18:04:20 -- common/autotest_common.sh@10 -- # set +x 00:27:16.589 [2024-07-22 18:04:20.689385] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:27:16.589 [ 00:27:16.589 { 00:27:16.589 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:16.589 "subtype": "Discovery", 00:27:16.589 "listen_addresses": [], 00:27:16.589 "allow_any_host": true, 00:27:16.589 "hosts": [] 00:27:16.589 }, 00:27:16.589 { 00:27:16.589 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:27:16.589 "subtype": "NVMe", 00:27:16.589 "listen_addresses": [ 00:27:16.589 { 00:27:16.589 "transport": "TCP", 00:27:16.589 "trtype": "TCP", 00:27:16.589 "adrfam": "IPv4", 00:27:16.589 "traddr": "10.0.0.2", 00:27:16.589 "trsvcid": "4420" 00:27:16.589 } 00:27:16.589 ], 00:27:16.589 "allow_any_host": true, 00:27:16.589 "hosts": [], 00:27:16.589 "serial_number": "SPDK00000000000001", 00:27:16.589 "model_number": "SPDK bdev Controller", 00:27:16.589 "max_namespaces": 2, 00:27:16.589 "min_cntlid": 1, 00:27:16.589 "max_cntlid": 65519, 00:27:16.589 "namespaces": [ 00:27:16.589 { 00:27:16.589 "nsid": 1, 00:27:16.589 "bdev_name": "Malloc0", 00:27:16.589 "name": "Malloc0", 00:27:16.589 "nguid": "00C4AC8E5559432C97546F7C2AFBBBD8", 00:27:16.589 "uuid": "00c4ac8e-5559-432c-9754-6f7c2afbbbd8" 00:27:16.589 } 00:27:16.589 ] 00:27:16.589 } 00:27:16.589 ] 00:27:16.589 18:04:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:16.589 18:04:20 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:16.589 18:04:20 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:16.589 18:04:20 -- host/aer.sh@33 -- # aerpid=1802797 00:27:16.589 18:04:20 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:16.589 18:04:20 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:16.589 18:04:20 -- common/autotest_common.sh@1244 -- # local i=0 00:27:16.589 18:04:20 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:16.589 18:04:20 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:27:16.589 18:04:20 -- common/autotest_common.sh@1247 -- # i=1 00:27:16.589 18:04:20 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:27:16.589 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.589 18:04:20 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:16.589 18:04:20 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:27:16.589 18:04:20 -- common/autotest_common.sh@1247 -- # i=2 00:27:16.589 18:04:20 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:27:16.850 18:04:20 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:16.850 18:04:20 -- common/autotest_common.sh@1246 -- # '[' 2 -lt 200 ']' 00:27:16.850 18:04:20 -- common/autotest_common.sh@1247 -- # i=3 00:27:16.850 18:04:20 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:27:16.850 18:04:21 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:16.850 18:04:21 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:27:16.850 18:04:21 -- common/autotest_common.sh@1255 -- # return 0 00:27:16.850 18:04:21 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:16.850 18:04:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:16.850 18:04:21 -- common/autotest_common.sh@10 -- # set +x 00:27:16.850 Malloc1 00:27:16.850 18:04:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:16.850 18:04:21 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:16.850 18:04:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:16.850 18:04:21 -- common/autotest_common.sh@10 -- # set +x 00:27:16.850 18:04:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:16.850 18:04:21 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:16.850 18:04:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:16.850 18:04:21 -- common/autotest_common.sh@10 -- # set +x 00:27:16.850 [ 00:27:16.850 { 00:27:16.850 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:16.850 "subtype": "Discovery", 00:27:16.850 "listen_addresses": [], 00:27:16.850 "allow_any_host": true, 00:27:16.850 "hosts": [] 00:27:16.850 }, 00:27:16.850 { 00:27:16.850 Asynchronous Event Request test 00:27:16.850 Attaching to 10.0.0.2 00:27:16.850 Attached to 10.0.0.2 00:27:16.850 Registering asynchronous event callbacks... 00:27:16.850 Starting namespace attribute notice tests for all controllers... 00:27:16.850 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:16.850 aer_cb - Changed Namespace 00:27:16.850 Cleaning up... 00:27:16.850 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:16.850 "subtype": "NVMe", 00:27:16.850 "listen_addresses": [ 00:27:16.850 { 00:27:16.850 "transport": "TCP", 00:27:16.850 "trtype": "TCP", 00:27:16.850 "adrfam": "IPv4", 00:27:16.850 "traddr": "10.0.0.2", 00:27:16.850 "trsvcid": "4420" 00:27:16.850 } 00:27:16.850 ], 00:27:16.850 "allow_any_host": true, 00:27:16.850 "hosts": [], 00:27:16.850 "serial_number": "SPDK00000000000001", 00:27:16.850 "model_number": "SPDK bdev Controller", 00:27:16.850 "max_namespaces": 2, 00:27:16.850 "min_cntlid": 1, 00:27:16.850 "max_cntlid": 65519, 00:27:16.850 "namespaces": [ 00:27:16.850 { 00:27:16.850 "nsid": 1, 00:27:16.850 "bdev_name": "Malloc0", 00:27:16.850 "name": "Malloc0", 00:27:16.850 "nguid": "00C4AC8E5559432C97546F7C2AFBBBD8", 00:27:16.850 "uuid": "00c4ac8e-5559-432c-9754-6f7c2afbbbd8" 00:27:16.850 }, 00:27:16.850 { 00:27:16.850 "nsid": 2, 00:27:16.850 "bdev_name": "Malloc1", 00:27:16.850 "name": "Malloc1", 00:27:16.850 "nguid": "D261B3CAD7CB4B6CBAE38EC37C78C789", 00:27:16.850 "uuid": "d261b3ca-d7cb-4b6c-bae3-8ec37c78c789" 00:27:16.850 } 00:27:16.850 ] 00:27:16.850 } 00:27:16.850 ] 00:27:16.850 18:04:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:16.850 18:04:21 -- host/aer.sh@43 -- # wait 1802797 00:27:16.850 18:04:21 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:16.850 18:04:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:16.850 18:04:21 -- common/autotest_common.sh@10 -- # set +x 00:27:16.850 18:04:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:16.850 18:04:21 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:16.850 18:04:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:16.850 18:04:21 -- common/autotest_common.sh@10 -- # set +x 00:27:16.850 18:04:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:16.850 18:04:21 -- host/aer.sh@47 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:16.850 18:04:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:16.850 18:04:21 -- common/autotest_common.sh@10 -- # set +x 00:27:17.110 18:04:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.110 18:04:21 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:17.110 18:04:21 -- host/aer.sh@51 -- # nvmftestfini 00:27:17.110 18:04:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:17.110 18:04:21 -- nvmf/common.sh@116 -- # sync 00:27:17.110 18:04:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:17.110 18:04:21 -- nvmf/common.sh@119 -- # set +e 00:27:17.110 18:04:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:17.110 18:04:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:17.110 rmmod nvme_tcp 00:27:17.110 rmmod nvme_fabrics 00:27:17.110 rmmod nvme_keyring 00:27:17.110 18:04:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:17.110 18:04:21 -- nvmf/common.sh@123 -- # set -e 00:27:17.110 18:04:21 -- nvmf/common.sh@124 -- # return 0 00:27:17.110 18:04:21 -- nvmf/common.sh@477 -- # '[' -n 1802596 ']' 00:27:17.110 18:04:21 -- nvmf/common.sh@478 -- # killprocess 1802596 00:27:17.110 18:04:21 -- common/autotest_common.sh@926 -- # '[' -z 1802596 ']' 00:27:17.110 18:04:21 -- common/autotest_common.sh@930 -- # kill -0 1802596 00:27:17.110 18:04:21 -- common/autotest_common.sh@931 -- # uname 00:27:17.110 18:04:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:17.110 18:04:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1802596 00:27:17.110 18:04:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:17.110 18:04:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:17.110 18:04:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1802596' 00:27:17.110 killing process with pid 1802596 00:27:17.110 18:04:21 -- common/autotest_common.sh@945 -- # kill 1802596 00:27:17.110 [2024-07-22 18:04:21.265686] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:27:17.110 18:04:21 -- common/autotest_common.sh@950 -- # wait 1802596 00:27:17.370 18:04:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:17.370 18:04:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:17.370 18:04:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:17.370 18:04:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:17.370 18:04:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:17.370 18:04:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.370 18:04:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:17.370 18:04:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.283 18:04:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:19.283 00:27:19.283 real 0m12.303s 00:27:19.283 user 0m8.324s 00:27:19.283 sys 0m6.716s 00:27:19.283 18:04:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:19.283 18:04:23 -- common/autotest_common.sh@10 -- # set +x 00:27:19.283 ************************************ 00:27:19.283 END TEST nvmf_aer 00:27:19.283 ************************************ 00:27:19.283 18:04:23 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:19.283 18:04:23 -- common/autotest_common.sh@1077 -- # 
'[' 3 -le 1 ']' 00:27:19.283 18:04:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:19.283 18:04:23 -- common/autotest_common.sh@10 -- # set +x 00:27:19.283 ************************************ 00:27:19.283 START TEST nvmf_async_init 00:27:19.283 ************************************ 00:27:19.283 18:04:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:19.544 * Looking for test storage... 00:27:19.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:19.544 18:04:23 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:19.544 18:04:23 -- nvmf/common.sh@7 -- # uname -s 00:27:19.544 18:04:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:19.544 18:04:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:19.544 18:04:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:19.544 18:04:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:19.544 18:04:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:19.544 18:04:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:19.544 18:04:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:19.544 18:04:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:19.544 18:04:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:19.544 18:04:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:19.544 18:04:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:27:19.544 18:04:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:27:19.544 18:04:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:19.544 18:04:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:19.544 18:04:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:19.544 18:04:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:19.544 18:04:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:19.544 18:04:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:19.544 18:04:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:19.544 18:04:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.544 18:04:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.544 18:04:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.544 18:04:23 -- paths/export.sh@5 -- # export PATH 00:27:19.544 18:04:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.544 18:04:23 -- nvmf/common.sh@46 -- # : 0 00:27:19.544 18:04:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:19.544 18:04:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:19.544 18:04:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:19.544 18:04:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:19.544 18:04:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:19.544 18:04:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:19.544 18:04:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:19.544 18:04:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:19.544 18:04:23 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:19.544 18:04:23 -- host/async_init.sh@14 -- # null_block_size=512 00:27:19.544 18:04:23 -- host/async_init.sh@15 -- # null_bdev=null0 00:27:19.544 18:04:23 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:19.544 18:04:23 -- host/async_init.sh@20 -- # uuidgen 00:27:19.544 18:04:23 -- host/async_init.sh@20 -- # tr -d - 00:27:19.544 18:04:23 -- host/async_init.sh@20 -- # nguid=6851dd2765e04550a893c4fa52f9862f 00:27:19.544 18:04:23 -- host/async_init.sh@22 -- # nvmftestinit 00:27:19.544 18:04:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:19.544 18:04:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:19.544 18:04:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:19.544 18:04:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:19.544 18:04:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:19.544 18:04:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.544 18:04:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:19.544 18:04:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.544 18:04:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:19.544 18:04:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:19.544 18:04:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:19.544 18:04:23 -- common/autotest_common.sh@10 -- # set +x 00:27:27.685 18:04:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:27.685 18:04:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:27.685 18:04:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:27.685 18:04:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:27.685 18:04:31 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:27.685 18:04:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:27.685 18:04:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:27.685 18:04:31 -- nvmf/common.sh@294 -- # net_devs=() 00:27:27.685 18:04:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:27.685 18:04:31 -- nvmf/common.sh@295 -- # e810=() 00:27:27.685 18:04:31 -- nvmf/common.sh@295 -- # local -ga e810 00:27:27.685 18:04:31 -- nvmf/common.sh@296 -- # x722=() 00:27:27.685 18:04:31 -- nvmf/common.sh@296 -- # local -ga x722 00:27:27.685 18:04:31 -- nvmf/common.sh@297 -- # mlx=() 00:27:27.685 18:04:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:27.685 18:04:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:27.685 18:04:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:27.685 18:04:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:27.685 18:04:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:27.685 18:04:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:27.685 18:04:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:27.685 18:04:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:27.685 18:04:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:27.685 18:04:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:27.685 18:04:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:27.685 18:04:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:27.685 18:04:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:27.685 18:04:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:27.685 18:04:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:27.685 18:04:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:27.685 18:04:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:27.685 18:04:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:27.685 18:04:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:27.685 18:04:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:27.685 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:27.685 18:04:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:27.685 18:04:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:27.685 18:04:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.685 18:04:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.685 18:04:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:27.685 18:04:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:27.685 18:04:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:27.685 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:27.685 18:04:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:27.685 18:04:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:27.685 18:04:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.685 18:04:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.685 18:04:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:27.685 18:04:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:27.685 18:04:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:27.685 18:04:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:27.685 18:04:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:27.685 
18:04:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.685 18:04:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:27.685 18:04:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.685 18:04:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:27.685 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:27.685 18:04:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.685 18:04:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:27.685 18:04:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.685 18:04:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:27.685 18:04:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.685 18:04:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:27.685 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:27.685 18:04:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.685 18:04:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:27.685 18:04:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:27.685 18:04:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:27.685 18:04:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:27.685 18:04:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:27.685 18:04:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:27.685 18:04:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:27.685 18:04:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:27.685 18:04:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:27.685 18:04:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:27.685 18:04:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:27.685 18:04:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:27.685 18:04:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:27.685 18:04:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:27.685 18:04:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:27.685 18:04:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:27.685 18:04:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:27.685 18:04:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:27.685 18:04:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:27.685 18:04:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:27.685 18:04:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:27.685 18:04:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:27.946 18:04:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:27.946 18:04:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:27.946 18:04:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:27.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:27.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.711 ms 00:27:27.946 00:27:27.946 --- 10.0.0.2 ping statistics --- 00:27:27.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.946 rtt min/avg/max/mdev = 0.711/0.711/0.711/0.000 ms 00:27:27.946 18:04:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:27.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:27.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:27:27.946 00:27:27.946 --- 10.0.0.1 ping statistics --- 00:27:27.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.946 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:27:27.946 18:04:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:27.946 18:04:32 -- nvmf/common.sh@410 -- # return 0 00:27:27.946 18:04:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:27.946 18:04:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:27.946 18:04:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:27.946 18:04:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:27.946 18:04:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:27.946 18:04:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:27.946 18:04:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:27.946 18:04:32 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:27.946 18:04:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:27.946 18:04:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:27.946 18:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:27.946 18:04:32 -- nvmf/common.sh@469 -- # nvmfpid=1807302 00:27:27.946 18:04:32 -- nvmf/common.sh@470 -- # waitforlisten 1807302 00:27:27.946 18:04:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:27.946 18:04:32 -- common/autotest_common.sh@819 -- # '[' -z 1807302 ']' 00:27:27.946 18:04:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:27.946 18:04:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:27.946 18:04:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:27.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:27.946 18:04:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:27.946 18:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:27.946 [2024-07-22 18:04:32.093574] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:27.947 [2024-07-22 18:04:32.093635] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:27.947 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.947 [2024-07-22 18:04:32.185875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.208 [2024-07-22 18:04:32.275728] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:28.208 [2024-07-22 18:04:32.275878] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:28.208 [2024-07-22 18:04:32.275887] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:28.208 [2024-07-22 18:04:32.275895] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:28.208 [2024-07-22 18:04:32.275933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.779 18:04:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:28.779 18:04:32 -- common/autotest_common.sh@852 -- # return 0 00:27:28.779 18:04:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:28.779 18:04:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:28.779 18:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:28.779 18:04:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:28.779 18:04:32 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:28.779 18:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:28.779 18:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:28.779 [2024-07-22 18:04:32.994077] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:28.779 18:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:28.779 18:04:32 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:28.779 18:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:28.779 18:04:32 -- common/autotest_common.sh@10 -- # set +x 00:27:28.779 null0 00:27:28.779 18:04:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:28.779 18:04:33 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:28.779 18:04:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:28.779 18:04:33 -- common/autotest_common.sh@10 -- # set +x 00:27:28.779 18:04:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:28.779 18:04:33 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:28.779 18:04:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:28.779 18:04:33 -- common/autotest_common.sh@10 -- # set +x 00:27:28.779 18:04:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:28.779 18:04:33 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 6851dd2765e04550a893c4fa52f9862f 00:27:28.779 18:04:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:28.779 18:04:33 -- common/autotest_common.sh@10 -- # set +x 00:27:28.779 18:04:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:28.779 18:04:33 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:28.779 18:04:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:28.779 18:04:33 -- common/autotest_common.sh@10 -- # set +x 00:27:28.779 [2024-07-22 18:04:33.054425] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:29.039 18:04:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.039 18:04:33 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:29.039 18:04:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.039 18:04:33 -- common/autotest_common.sh@10 -- # set +x 00:27:29.039 nvme0n1 00:27:29.039 18:04:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.039 18:04:33 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:29.039 18:04:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.039 18:04:33 -- common/autotest_common.sh@10 -- # set +x 00:27:29.039 [ 00:27:29.039 { 00:27:29.039 "name": "nvme0n1", 00:27:29.039 "aliases": [ 00:27:29.039 
"6851dd27-65e0-4550-a893-c4fa52f9862f" 00:27:29.039 ], 00:27:29.039 "product_name": "NVMe disk", 00:27:29.039 "block_size": 512, 00:27:29.039 "num_blocks": 2097152, 00:27:29.039 "uuid": "6851dd27-65e0-4550-a893-c4fa52f9862f", 00:27:29.039 "assigned_rate_limits": { 00:27:29.039 "rw_ios_per_sec": 0, 00:27:29.039 "rw_mbytes_per_sec": 0, 00:27:29.039 "r_mbytes_per_sec": 0, 00:27:29.039 "w_mbytes_per_sec": 0 00:27:29.039 }, 00:27:29.039 "claimed": false, 00:27:29.039 "zoned": false, 00:27:29.039 "supported_io_types": { 00:27:29.039 "read": true, 00:27:29.039 "write": true, 00:27:29.039 "unmap": false, 00:27:29.039 "write_zeroes": true, 00:27:29.039 "flush": true, 00:27:29.039 "reset": true, 00:27:29.039 "compare": true, 00:27:29.039 "compare_and_write": true, 00:27:29.039 "abort": true, 00:27:29.039 "nvme_admin": true, 00:27:29.039 "nvme_io": true 00:27:29.039 }, 00:27:29.039 "driver_specific": { 00:27:29.039 "nvme": [ 00:27:29.039 { 00:27:29.039 "trid": { 00:27:29.039 "trtype": "TCP", 00:27:29.039 "adrfam": "IPv4", 00:27:29.039 "traddr": "10.0.0.2", 00:27:29.039 "trsvcid": "4420", 00:27:29.039 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:29.039 }, 00:27:29.039 "ctrlr_data": { 00:27:29.300 "cntlid": 1, 00:27:29.300 "vendor_id": "0x8086", 00:27:29.300 "model_number": "SPDK bdev Controller", 00:27:29.300 "serial_number": "00000000000000000000", 00:27:29.300 "firmware_revision": "24.01.1", 00:27:29.300 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:29.300 "oacs": { 00:27:29.300 "security": 0, 00:27:29.300 "format": 0, 00:27:29.300 "firmware": 0, 00:27:29.300 "ns_manage": 0 00:27:29.300 }, 00:27:29.300 "multi_ctrlr": true, 00:27:29.300 "ana_reporting": false 00:27:29.300 }, 00:27:29.300 "vs": { 00:27:29.300 "nvme_version": "1.3" 00:27:29.300 }, 00:27:29.300 "ns_data": { 00:27:29.300 "id": 1, 00:27:29.300 "can_share": true 00:27:29.300 } 00:27:29.300 } 00:27:29.300 ], 00:27:29.300 "mp_policy": "active_passive" 00:27:29.300 } 00:27:29.300 } 00:27:29.300 ] 00:27:29.300 18:04:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.300 18:04:33 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:29.300 18:04:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.300 18:04:33 -- common/autotest_common.sh@10 -- # set +x 00:27:29.300 [2024-07-22 18:04:33.324655] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:29.300 [2024-07-22 18:04:33.324731] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cf900 (9): Bad file descriptor 00:27:29.300 [2024-07-22 18:04:33.456440] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:29.300 18:04:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.300 18:04:33 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:29.300 18:04:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.300 18:04:33 -- common/autotest_common.sh@10 -- # set +x 00:27:29.300 [ 00:27:29.300 { 00:27:29.300 "name": "nvme0n1", 00:27:29.300 "aliases": [ 00:27:29.300 "6851dd27-65e0-4550-a893-c4fa52f9862f" 00:27:29.300 ], 00:27:29.300 "product_name": "NVMe disk", 00:27:29.300 "block_size": 512, 00:27:29.300 "num_blocks": 2097152, 00:27:29.300 "uuid": "6851dd27-65e0-4550-a893-c4fa52f9862f", 00:27:29.300 "assigned_rate_limits": { 00:27:29.300 "rw_ios_per_sec": 0, 00:27:29.300 "rw_mbytes_per_sec": 0, 00:27:29.300 "r_mbytes_per_sec": 0, 00:27:29.300 "w_mbytes_per_sec": 0 00:27:29.300 }, 00:27:29.300 "claimed": false, 00:27:29.300 "zoned": false, 00:27:29.300 "supported_io_types": { 00:27:29.300 "read": true, 00:27:29.300 "write": true, 00:27:29.300 "unmap": false, 00:27:29.300 "write_zeroes": true, 00:27:29.300 "flush": true, 00:27:29.300 "reset": true, 00:27:29.300 "compare": true, 00:27:29.300 "compare_and_write": true, 00:27:29.300 "abort": true, 00:27:29.300 "nvme_admin": true, 00:27:29.300 "nvme_io": true 00:27:29.300 }, 00:27:29.300 "driver_specific": { 00:27:29.300 "nvme": [ 00:27:29.300 { 00:27:29.300 "trid": { 00:27:29.300 "trtype": "TCP", 00:27:29.300 "adrfam": "IPv4", 00:27:29.300 "traddr": "10.0.0.2", 00:27:29.300 "trsvcid": "4420", 00:27:29.300 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:29.300 }, 00:27:29.300 "ctrlr_data": { 00:27:29.300 "cntlid": 2, 00:27:29.300 "vendor_id": "0x8086", 00:27:29.300 "model_number": "SPDK bdev Controller", 00:27:29.300 "serial_number": "00000000000000000000", 00:27:29.300 "firmware_revision": "24.01.1", 00:27:29.300 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:29.300 "oacs": { 00:27:29.300 "security": 0, 00:27:29.300 "format": 0, 00:27:29.300 "firmware": 0, 00:27:29.300 "ns_manage": 0 00:27:29.300 }, 00:27:29.300 "multi_ctrlr": true, 00:27:29.300 "ana_reporting": false 00:27:29.300 }, 00:27:29.300 "vs": { 00:27:29.300 "nvme_version": "1.3" 00:27:29.300 }, 00:27:29.300 "ns_data": { 00:27:29.300 "id": 1, 00:27:29.300 "can_share": true 00:27:29.300 } 00:27:29.301 } 00:27:29.301 ], 00:27:29.301 "mp_policy": "active_passive" 00:27:29.301 } 00:27:29.301 } 00:27:29.301 ] 00:27:29.301 18:04:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.301 18:04:33 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.301 18:04:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.301 18:04:33 -- common/autotest_common.sh@10 -- # set +x 00:27:29.301 18:04:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.301 18:04:33 -- host/async_init.sh@53 -- # mktemp 00:27:29.301 18:04:33 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.ZSBNODaFyI 00:27:29.301 18:04:33 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:29.301 18:04:33 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.ZSBNODaFyI 00:27:29.301 18:04:33 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:29.301 18:04:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.301 18:04:33 -- common/autotest_common.sh@10 -- # set +x 00:27:29.301 18:04:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.301 18:04:33 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:29.301 18:04:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.301 18:04:33 -- common/autotest_common.sh@10 -- # set +x 00:27:29.301 [2024-07-22 18:04:33.529279] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:29.301 [2024-07-22 18:04:33.529447] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:29.301 18:04:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.301 18:04:33 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZSBNODaFyI 00:27:29.301 18:04:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.301 18:04:33 -- common/autotest_common.sh@10 -- # set +x 00:27:29.301 18:04:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.301 18:04:33 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZSBNODaFyI 00:27:29.301 18:04:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.301 18:04:33 -- common/autotest_common.sh@10 -- # set +x 00:27:29.301 [2024-07-22 18:04:33.553337] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:29.562 nvme0n1 00:27:29.562 18:04:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.562 18:04:33 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:29.562 18:04:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.562 18:04:33 -- common/autotest_common.sh@10 -- # set +x 00:27:29.562 [ 00:27:29.562 { 00:27:29.562 "name": "nvme0n1", 00:27:29.562 "aliases": [ 00:27:29.562 "6851dd27-65e0-4550-a893-c4fa52f9862f" 00:27:29.562 ], 00:27:29.563 "product_name": "NVMe disk", 00:27:29.563 "block_size": 512, 00:27:29.563 "num_blocks": 2097152, 00:27:29.563 "uuid": "6851dd27-65e0-4550-a893-c4fa52f9862f", 00:27:29.563 "assigned_rate_limits": { 00:27:29.563 "rw_ios_per_sec": 0, 00:27:29.563 "rw_mbytes_per_sec": 0, 00:27:29.563 "r_mbytes_per_sec": 0, 00:27:29.563 "w_mbytes_per_sec": 0 00:27:29.563 }, 00:27:29.563 "claimed": false, 00:27:29.563 "zoned": false, 00:27:29.563 "supported_io_types": { 00:27:29.563 "read": true, 00:27:29.563 "write": true, 00:27:29.563 "unmap": false, 00:27:29.563 "write_zeroes": true, 00:27:29.563 "flush": true, 00:27:29.563 "reset": true, 00:27:29.563 "compare": true, 00:27:29.563 "compare_and_write": true, 00:27:29.563 "abort": true, 00:27:29.563 "nvme_admin": true, 00:27:29.563 "nvme_io": true 00:27:29.563 }, 00:27:29.563 "driver_specific": { 00:27:29.563 "nvme": [ 00:27:29.563 { 00:27:29.563 "trid": { 00:27:29.563 "trtype": "TCP", 00:27:29.563 "adrfam": "IPv4", 00:27:29.563 "traddr": "10.0.0.2", 00:27:29.563 "trsvcid": "4421", 00:27:29.563 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:29.563 }, 00:27:29.563 "ctrlr_data": { 00:27:29.563 "cntlid": 3, 00:27:29.563 "vendor_id": "0x8086", 00:27:29.563 "model_number": "SPDK bdev Controller", 00:27:29.563 "serial_number": "00000000000000000000", 00:27:29.563 "firmware_revision": "24.01.1", 00:27:29.563 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:29.563 "oacs": { 00:27:29.563 "security": 0, 00:27:29.563 "format": 0, 00:27:29.563 "firmware": 0, 00:27:29.563 "ns_manage": 0 00:27:29.563 }, 00:27:29.563 "multi_ctrlr": true, 00:27:29.563 "ana_reporting": false 00:27:29.563 }, 00:27:29.563 "vs": 
{ 00:27:29.563 "nvme_version": "1.3" 00:27:29.563 }, 00:27:29.563 "ns_data": { 00:27:29.563 "id": 1, 00:27:29.563 "can_share": true 00:27:29.563 } 00:27:29.563 } 00:27:29.563 ], 00:27:29.563 "mp_policy": "active_passive" 00:27:29.563 } 00:27:29.563 } 00:27:29.563 ] 00:27:29.563 18:04:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.563 18:04:33 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.563 18:04:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:29.563 18:04:33 -- common/autotest_common.sh@10 -- # set +x 00:27:29.563 18:04:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:29.563 18:04:33 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.ZSBNODaFyI 00:27:29.563 18:04:33 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:29.563 18:04:33 -- host/async_init.sh@78 -- # nvmftestfini 00:27:29.563 18:04:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:29.563 18:04:33 -- nvmf/common.sh@116 -- # sync 00:27:29.563 18:04:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:29.563 18:04:33 -- nvmf/common.sh@119 -- # set +e 00:27:29.563 18:04:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:29.563 18:04:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:29.563 rmmod nvme_tcp 00:27:29.563 rmmod nvme_fabrics 00:27:29.563 rmmod nvme_keyring 00:27:29.563 18:04:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:29.563 18:04:33 -- nvmf/common.sh@123 -- # set -e 00:27:29.563 18:04:33 -- nvmf/common.sh@124 -- # return 0 00:27:29.563 18:04:33 -- nvmf/common.sh@477 -- # '[' -n 1807302 ']' 00:27:29.563 18:04:33 -- nvmf/common.sh@478 -- # killprocess 1807302 00:27:29.563 18:04:33 -- common/autotest_common.sh@926 -- # '[' -z 1807302 ']' 00:27:29.563 18:04:33 -- common/autotest_common.sh@930 -- # kill -0 1807302 00:27:29.563 18:04:33 -- common/autotest_common.sh@931 -- # uname 00:27:29.563 18:04:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:29.563 18:04:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1807302 00:27:29.563 18:04:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:29.563 18:04:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:29.563 18:04:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1807302' 00:27:29.563 killing process with pid 1807302 00:27:29.563 18:04:33 -- common/autotest_common.sh@945 -- # kill 1807302 00:27:29.563 18:04:33 -- common/autotest_common.sh@950 -- # wait 1807302 00:27:29.824 18:04:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:29.824 18:04:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:29.824 18:04:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:29.824 18:04:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:29.824 18:04:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:29.824 18:04:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.824 18:04:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:29.824 18:04:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.370 18:04:36 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:32.370 00:27:32.370 real 0m12.523s 00:27:32.370 user 0m4.485s 00:27:32.370 sys 0m6.576s 00:27:32.370 18:04:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:32.370 18:04:36 -- common/autotest_common.sh@10 -- # set +x 00:27:32.370 ************************************ 00:27:32.370 END TEST nvmf_async_init 00:27:32.370 
************************************ 00:27:32.370 18:04:36 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:32.370 18:04:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:32.370 18:04:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:32.370 18:04:36 -- common/autotest_common.sh@10 -- # set +x 00:27:32.370 ************************************ 00:27:32.370 START TEST dma 00:27:32.370 ************************************ 00:27:32.370 18:04:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:32.370 * Looking for test storage... 00:27:32.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:32.370 18:04:36 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:32.370 18:04:36 -- nvmf/common.sh@7 -- # uname -s 00:27:32.370 18:04:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:32.370 18:04:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:32.370 18:04:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:32.370 18:04:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:32.370 18:04:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:32.370 18:04:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:32.370 18:04:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:32.370 18:04:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:32.370 18:04:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:32.370 18:04:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:32.370 18:04:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:27:32.370 18:04:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:27:32.370 18:04:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:32.370 18:04:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:32.370 18:04:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:32.370 18:04:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:32.370 18:04:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:32.370 18:04:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:32.370 18:04:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:32.370 18:04:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.370 18:04:36 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.370 18:04:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.370 18:04:36 -- paths/export.sh@5 -- # export PATH 00:27:32.370 18:04:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.370 18:04:36 -- nvmf/common.sh@46 -- # : 0 00:27:32.370 18:04:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:32.370 18:04:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:32.370 18:04:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:32.370 18:04:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:32.370 18:04:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:32.370 18:04:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:32.370 18:04:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:32.370 18:04:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:32.370 18:04:36 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:32.370 18:04:36 -- host/dma.sh@13 -- # exit 0 00:27:32.370 00:27:32.370 real 0m0.124s 00:27:32.370 user 0m0.058s 00:27:32.370 sys 0m0.075s 00:27:32.370 18:04:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:32.370 18:04:36 -- common/autotest_common.sh@10 -- # set +x 00:27:32.370 ************************************ 00:27:32.370 END TEST dma 00:27:32.370 ************************************ 00:27:32.370 18:04:36 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:32.370 18:04:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:32.370 18:04:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:32.370 18:04:36 -- common/autotest_common.sh@10 -- # set +x 00:27:32.370 ************************************ 00:27:32.370 START TEST nvmf_identify 00:27:32.370 ************************************ 00:27:32.370 18:04:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:32.370 * Looking for 
test storage... 00:27:32.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:32.370 18:04:36 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:32.370 18:04:36 -- nvmf/common.sh@7 -- # uname -s 00:27:32.370 18:04:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:32.370 18:04:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:32.370 18:04:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:32.370 18:04:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:32.370 18:04:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:32.370 18:04:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:32.370 18:04:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:32.370 18:04:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:32.370 18:04:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:32.370 18:04:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:32.370 18:04:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:27:32.370 18:04:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:27:32.370 18:04:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:32.370 18:04:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:32.370 18:04:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:32.370 18:04:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:32.370 18:04:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:32.370 18:04:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:32.370 18:04:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:32.370 18:04:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.370 18:04:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.371 18:04:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.371 18:04:36 -- paths/export.sh@5 -- # export PATH 00:27:32.371 18:04:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.371 18:04:36 -- nvmf/common.sh@46 -- # : 0 00:27:32.371 18:04:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:32.371 18:04:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:32.371 18:04:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:32.371 18:04:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:32.371 18:04:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:32.371 18:04:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:32.371 18:04:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:32.371 18:04:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:32.371 18:04:36 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:32.371 18:04:36 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:32.371 18:04:36 -- host/identify.sh@14 -- # nvmftestinit 00:27:32.371 18:04:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:32.371 18:04:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:32.371 18:04:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:32.371 18:04:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:32.371 18:04:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:32.371 18:04:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.371 18:04:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:32.371 18:04:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.371 18:04:36 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:32.371 18:04:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:32.371 18:04:36 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:32.371 18:04:36 -- common/autotest_common.sh@10 -- # set +x 00:27:40.512 18:04:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:40.512 18:04:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:40.512 18:04:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:40.512 18:04:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:40.512 18:04:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:40.512 18:04:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:40.512 18:04:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:40.512 18:04:44 -- nvmf/common.sh@294 -- # net_devs=() 00:27:40.512 18:04:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:40.512 18:04:44 -- nvmf/common.sh@295 
-- # e810=() 00:27:40.512 18:04:44 -- nvmf/common.sh@295 -- # local -ga e810 00:27:40.512 18:04:44 -- nvmf/common.sh@296 -- # x722=() 00:27:40.512 18:04:44 -- nvmf/common.sh@296 -- # local -ga x722 00:27:40.512 18:04:44 -- nvmf/common.sh@297 -- # mlx=() 00:27:40.512 18:04:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:40.512 18:04:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:40.512 18:04:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:40.512 18:04:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:40.512 18:04:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:40.512 18:04:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:40.512 18:04:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:40.512 18:04:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:40.512 18:04:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:40.512 18:04:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:40.512 18:04:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:40.512 18:04:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:40.512 18:04:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:40.512 18:04:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:40.512 18:04:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:40.512 18:04:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:40.512 18:04:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:40.512 18:04:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:40.512 18:04:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:40.512 18:04:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:40.512 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:40.512 18:04:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:40.512 18:04:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:40.512 18:04:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.512 18:04:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.512 18:04:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:40.512 18:04:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:40.512 18:04:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:40.512 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:40.512 18:04:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:40.512 18:04:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:40.512 18:04:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.512 18:04:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.512 18:04:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:40.512 18:04:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:40.512 18:04:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:40.512 18:04:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:40.512 18:04:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:40.512 18:04:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.512 18:04:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:40.512 18:04:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.512 18:04:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:40.512 Found 
net devices under 0000:4b:00.0: cvl_0_0 00:27:40.512 18:04:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.512 18:04:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:40.512 18:04:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.512 18:04:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:40.512 18:04:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.512 18:04:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:40.512 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:40.512 18:04:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.512 18:04:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:40.512 18:04:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:40.512 18:04:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:40.512 18:04:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:40.512 18:04:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:40.512 18:04:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:40.512 18:04:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:40.512 18:04:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:40.512 18:04:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:40.512 18:04:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:40.512 18:04:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:40.512 18:04:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:40.512 18:04:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:40.512 18:04:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:40.512 18:04:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:40.512 18:04:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:40.512 18:04:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:40.512 18:04:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:40.512 18:04:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:40.512 18:04:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:40.512 18:04:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:40.512 18:04:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:40.512 18:04:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:40.512 18:04:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:40.512 18:04:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:40.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:40.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:27:40.512 00:27:40.512 --- 10.0.0.2 ping statistics --- 00:27:40.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.512 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:27:40.512 18:04:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:40.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:40.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:27:40.512 00:27:40.512 --- 10.0.0.1 ping statistics --- 00:27:40.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.512 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:27:40.512 18:04:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:40.512 18:04:44 -- nvmf/common.sh@410 -- # return 0 00:27:40.512 18:04:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:40.512 18:04:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:40.512 18:04:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:40.513 18:04:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:40.513 18:04:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:40.513 18:04:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:40.513 18:04:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:40.513 18:04:44 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:40.513 18:04:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:40.513 18:04:44 -- common/autotest_common.sh@10 -- # set +x 00:27:40.513 18:04:44 -- host/identify.sh@19 -- # nvmfpid=1812114 00:27:40.513 18:04:44 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:40.513 18:04:44 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:40.513 18:04:44 -- host/identify.sh@23 -- # waitforlisten 1812114 00:27:40.513 18:04:44 -- common/autotest_common.sh@819 -- # '[' -z 1812114 ']' 00:27:40.513 18:04:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.513 18:04:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:40.513 18:04:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:40.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:40.513 18:04:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:40.513 18:04:44 -- common/autotest_common.sh@10 -- # set +x 00:27:40.513 [2024-07-22 18:04:44.728792] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:40.513 [2024-07-22 18:04:44.728856] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:40.513 EAL: No free 2048 kB hugepages reported on node 1 00:27:40.773 [2024-07-22 18:04:44.822464] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:40.773 [2024-07-22 18:04:44.910952] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:40.773 [2024-07-22 18:04:44.911113] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:40.773 [2024-07-22 18:04:44.911122] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:40.773 [2024-07-22 18:04:44.911129] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
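[annotation] Both targets in this job are started with -e 0xFFFF, so every tracepoint group is armed; the notice above names the two ways to look at the events. A short sketch, assuming the spdk_trace tool sits under build/bin in the same tree:

./build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt          # live snapshot, exactly as the notice suggests
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0                    # or keep the shm ring buffer for later
./build/bin/spdk_trace -f /tmp/nvmf_trace.0 > nvmf_trace.txt  # decode the saved file offline (-f assumed from the tool's usage)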
00:27:40.773 [2024-07-22 18:04:44.911280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.773 [2024-07-22 18:04:44.911394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:40.773 [2024-07-22 18:04:44.911473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:40.773 [2024-07-22 18:04:44.911476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.343 18:04:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:41.343 18:04:45 -- common/autotest_common.sh@852 -- # return 0 00:27:41.343 18:04:45 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:41.343 18:04:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:41.343 18:04:45 -- common/autotest_common.sh@10 -- # set +x 00:27:41.343 [2024-07-22 18:04:45.585440] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:41.343 18:04:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:41.343 18:04:45 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:41.343 18:04:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:41.343 18:04:45 -- common/autotest_common.sh@10 -- # set +x 00:27:41.604 18:04:45 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:41.604 18:04:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:41.604 18:04:45 -- common/autotest_common.sh@10 -- # set +x 00:27:41.604 Malloc0 00:27:41.605 18:04:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:41.605 18:04:45 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:41.605 18:04:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:41.605 18:04:45 -- common/autotest_common.sh@10 -- # set +x 00:27:41.605 18:04:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:41.605 18:04:45 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:41.605 18:04:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:41.605 18:04:45 -- common/autotest_common.sh@10 -- # set +x 00:27:41.605 18:04:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:41.605 18:04:45 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:41.605 18:04:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:41.605 18:04:45 -- common/autotest_common.sh@10 -- # set +x 00:27:41.605 [2024-07-22 18:04:45.681938] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:41.605 18:04:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:41.605 18:04:45 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:41.605 18:04:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:41.605 18:04:45 -- common/autotest_common.sh@10 -- # set +x 00:27:41.605 18:04:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:41.605 18:04:45 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:41.605 18:04:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:41.605 18:04:45 -- common/autotest_common.sh@10 -- # set +x 00:27:41.605 [2024-07-22 18:04:45.705762] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:27:41.605 [ 
00:27:41.605 { 00:27:41.605 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:41.605 "subtype": "Discovery", 00:27:41.605 "listen_addresses": [ 00:27:41.605 { 00:27:41.605 "transport": "TCP", 00:27:41.605 "trtype": "TCP", 00:27:41.605 "adrfam": "IPv4", 00:27:41.605 "traddr": "10.0.0.2", 00:27:41.605 "trsvcid": "4420" 00:27:41.605 } 00:27:41.605 ], 00:27:41.605 "allow_any_host": true, 00:27:41.605 "hosts": [] 00:27:41.605 }, 00:27:41.605 { 00:27:41.605 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:41.605 "subtype": "NVMe", 00:27:41.605 "listen_addresses": [ 00:27:41.605 { 00:27:41.605 "transport": "TCP", 00:27:41.605 "trtype": "TCP", 00:27:41.605 "adrfam": "IPv4", 00:27:41.605 "traddr": "10.0.0.2", 00:27:41.605 "trsvcid": "4420" 00:27:41.605 } 00:27:41.605 ], 00:27:41.605 "allow_any_host": true, 00:27:41.605 "hosts": [], 00:27:41.605 "serial_number": "SPDK00000000000001", 00:27:41.605 "model_number": "SPDK bdev Controller", 00:27:41.605 "max_namespaces": 32, 00:27:41.605 "min_cntlid": 1, 00:27:41.605 "max_cntlid": 65519, 00:27:41.605 "namespaces": [ 00:27:41.605 { 00:27:41.605 "nsid": 1, 00:27:41.605 "bdev_name": "Malloc0", 00:27:41.605 "name": "Malloc0", 00:27:41.605 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:41.605 "eui64": "ABCDEF0123456789", 00:27:41.605 "uuid": "bbb3c770-362a-467d-bfd2-b8cfc1f22a57" 00:27:41.605 } 00:27:41.605 ] 00:27:41.605 } 00:27:41.605 ] 00:27:41.605 18:04:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:41.605 18:04:45 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:41.605 [2024-07-22 18:04:45.742026] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
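[annotation] For the identify test the target is configured much like before, but with a malloc bdev, a serial number, NGUID/EUI64 identifiers, and a discovery listener, and then the spdk_nvme_identify example app is pointed at the discovery subsystem. A sketch of that configuration and the query, with the rpc.py socket path assumed and the identify command line taken verbatim from the trace:

RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_get_subsystems                                      # prints the discovery + cnode1 JSON shown above
./build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all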
00:27:41.605 [2024-07-22 18:04:45.742082] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1812207 ] 00:27:41.605 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.605 [2024-07-22 18:04:45.773160] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:41.605 [2024-07-22 18:04:45.773200] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:41.605 [2024-07-22 18:04:45.773204] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:41.605 [2024-07-22 18:04:45.773215] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:41.605 [2024-07-22 18:04:45.773222] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:41.605 [2024-07-22 18:04:45.776383] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:41.605 [2024-07-22 18:04:45.776414] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x89a9e0 0 00:27:41.605 [2024-07-22 18:04:45.784360] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:41.605 [2024-07-22 18:04:45.784370] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:41.605 [2024-07-22 18:04:45.784374] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:41.605 [2024-07-22 18:04:45.784377] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:41.605 [2024-07-22 18:04:45.784409] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.605 [2024-07-22 18:04:45.784414] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.605 [2024-07-22 18:04:45.784418] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x89a9e0) 00:27:41.605 [2024-07-22 18:04:45.784430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:41.605 [2024-07-22 18:04:45.784447] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902730, cid 0, qid 0 00:27:41.605 [2024-07-22 18:04:45.792360] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.605 [2024-07-22 18:04:45.792369] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.605 [2024-07-22 18:04:45.792372] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.605 [2024-07-22 18:04:45.792377] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902730) on tqpair=0x89a9e0 00:27:41.605 [2024-07-22 18:04:45.792388] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:41.605 [2024-07-22 18:04:45.792394] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:41.605 [2024-07-22 18:04:45.792399] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:27:41.605 [2024-07-22 18:04:45.792409] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.605 [2024-07-22 18:04:45.792413] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:27:41.605 [2024-07-22 18:04:45.792416] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x89a9e0) 00:27:41.605 [2024-07-22 18:04:45.792423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.605 [2024-07-22 18:04:45.792434] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902730, cid 0, qid 0 00:27:41.605 [2024-07-22 18:04:45.792596] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.605 [2024-07-22 18:04:45.792602] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.605 [2024-07-22 18:04:45.792606] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.605 [2024-07-22 18:04:45.792609] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902730) on tqpair=0x89a9e0 00:27:41.605 [2024-07-22 18:04:45.792614] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:41.605 [2024-07-22 18:04:45.792621] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:41.605 [2024-07-22 18:04:45.792627] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.605 [2024-07-22 18:04:45.792630] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.605 [2024-07-22 18:04:45.792634] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x89a9e0) 00:27:41.605 [2024-07-22 18:04:45.792640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.605 [2024-07-22 18:04:45.792649] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902730, cid 0, qid 0 00:27:41.605 [2024-07-22 18:04:45.792812] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.605 [2024-07-22 18:04:45.792818] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.605 [2024-07-22 18:04:45.792822] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.605 [2024-07-22 18:04:45.792825] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902730) on tqpair=0x89a9e0 00:27:41.605 [2024-07-22 18:04:45.792830] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:41.605 [2024-07-22 18:04:45.792837] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:41.605 [2024-07-22 18:04:45.792843] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.605 [2024-07-22 18:04:45.792847] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.605 [2024-07-22 18:04:45.792850] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x89a9e0) 00:27:41.605 [2024-07-22 18:04:45.792856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.605 [2024-07-22 18:04:45.792868] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902730, cid 0, qid 0 00:27:41.605 [2024-07-22 18:04:45.793030] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.605 [2024-07-22 18:04:45.793036] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.605 [2024-07-22 18:04:45.793039] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.605 [2024-07-22 18:04:45.793043] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902730) on tqpair=0x89a9e0 00:27:41.605 [2024-07-22 18:04:45.793047] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:41.605 [2024-07-22 18:04:45.793055] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.605 [2024-07-22 18:04:45.793059] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.605 [2024-07-22 18:04:45.793062] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x89a9e0) 00:27:41.606 [2024-07-22 18:04:45.793069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.606 [2024-07-22 18:04:45.793078] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902730, cid 0, qid 0 00:27:41.606 [2024-07-22 18:04:45.793230] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.606 [2024-07-22 18:04:45.793236] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.606 [2024-07-22 18:04:45.793239] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.606 [2024-07-22 18:04:45.793242] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902730) on tqpair=0x89a9e0 00:27:41.606 [2024-07-22 18:04:45.793247] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:41.606 [2024-07-22 18:04:45.793251] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:41.606 [2024-07-22 18:04:45.793258] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:41.606 [2024-07-22 18:04:45.793363] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:27:41.606 [2024-07-22 18:04:45.793368] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:41.606 [2024-07-22 18:04:45.793375] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.606 [2024-07-22 18:04:45.793379] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.606 [2024-07-22 18:04:45.793382] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x89a9e0) 00:27:41.606 [2024-07-22 18:04:45.793388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.606 [2024-07-22 18:04:45.793398] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902730, cid 0, qid 0 00:27:41.606 [2024-07-22 18:04:45.793581] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.606 [2024-07-22 18:04:45.793588] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.606 [2024-07-22 18:04:45.793591] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.606 
[2024-07-22 18:04:45.793594] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902730) on tqpair=0x89a9e0 00:27:41.606 [2024-07-22 18:04:45.793599] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:41.606 [2024-07-22 18:04:45.793607] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.606 [2024-07-22 18:04:45.793610] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.606 [2024-07-22 18:04:45.793614] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x89a9e0) 00:27:41.606 [2024-07-22 18:04:45.793622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.606 [2024-07-22 18:04:45.793631] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902730, cid 0, qid 0 00:27:41.606 [2024-07-22 18:04:45.793833] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.606 [2024-07-22 18:04:45.793839] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.606 [2024-07-22 18:04:45.793842] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.606 [2024-07-22 18:04:45.793846] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902730) on tqpair=0x89a9e0 00:27:41.606 [2024-07-22 18:04:45.793850] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:41.606 [2024-07-22 18:04:45.793854] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:41.606 [2024-07-22 18:04:45.793861] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:41.606 [2024-07-22 18:04:45.793868] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:41.606 [2024-07-22 18:04:45.793877] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.606 [2024-07-22 18:04:45.793880] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.606 [2024-07-22 18:04:45.793884] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x89a9e0) 00:27:41.606 [2024-07-22 18:04:45.793890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.606 [2024-07-22 18:04:45.793899] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902730, cid 0, qid 0 00:27:41.606 [2024-07-22 18:04:45.794085] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.606 [2024-07-22 18:04:45.794091] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.606 [2024-07-22 18:04:45.794095] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.606 [2024-07-22 18:04:45.794098] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x89a9e0): datao=0, datal=4096, cccid=0 00:27:41.606 [2024-07-22 18:04:45.794103] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x902730) on tqpair(0x89a9e0): expected_datao=0, payload_size=4096 00:27:41.606 
[2024-07-22 18:04:45.794118] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.606 [2024-07-22 18:04:45.794123] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.606 [2024-07-22 18:04:45.834544] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.606 [2024-07-22 18:04:45.834554] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.606 [2024-07-22 18:04:45.834557] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.606 [2024-07-22 18:04:45.834561] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902730) on tqpair=0x89a9e0 00:27:41.606 [2024-07-22 18:04:45.834568] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:41.606 [2024-07-22 18:04:45.834573] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:41.606 [2024-07-22 18:04:45.834577] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:41.606 [2024-07-22 18:04:45.834582] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:27:41.606 [2024-07-22 18:04:45.834586] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:27:41.606 [2024-07-22 18:04:45.834590] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:41.606 [2024-07-22 18:04:45.834603] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:27:41.606 [2024-07-22 18:04:45.834609] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.606 [2024-07-22 18:04:45.834613] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.606 [2024-07-22 18:04:45.834616] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x89a9e0) 00:27:41.606 [2024-07-22 18:04:45.834624] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:41.606 [2024-07-22 18:04:45.834634] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902730, cid 0, qid 0 00:27:41.606 [2024-07-22 18:04:45.834794] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.606 [2024-07-22 18:04:45.834799] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.606 [2024-07-22 18:04:45.834803] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.606 [2024-07-22 18:04:45.834807] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902730) on tqpair=0x89a9e0 00:27:41.606 [2024-07-22 18:04:45.834814] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.606 [2024-07-22 18:04:45.834817] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.606 [2024-07-22 18:04:45.834820] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x89a9e0) 00:27:41.606 [2024-07-22 18:04:45.834826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.606 [2024-07-22 18:04:45.834831] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.606 [2024-07-22 18:04:45.834835] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.606 [2024-07-22 18:04:45.834838] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x89a9e0) 00:27:41.606 [2024-07-22 18:04:45.834843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.606 [2024-07-22 18:04:45.834849] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.606 [2024-07-22 18:04:45.834853] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.606 [2024-07-22 18:04:45.834856] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x89a9e0) 00:27:41.606 [2024-07-22 18:04:45.834861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.606 [2024-07-22 18:04:45.834866] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.606 [2024-07-22 18:04:45.834870] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.606 [2024-07-22 18:04:45.834873] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89a9e0) 00:27:41.606 [2024-07-22 18:04:45.834878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.606 [2024-07-22 18:04:45.834882] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:27:41.606 [2024-07-22 18:04:45.834892] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:41.606 [2024-07-22 18:04:45.834898] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.606 [2024-07-22 18:04:45.834901] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.606 [2024-07-22 18:04:45.834904] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x89a9e0) 00:27:41.606 [2024-07-22 18:04:45.834910] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.606 [2024-07-22 18:04:45.834921] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902730, cid 0, qid 0 00:27:41.606 [2024-07-22 18:04:45.834928] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902890, cid 1, qid 0 00:27:41.606 [2024-07-22 18:04:45.834932] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9029f0, cid 2, qid 0 00:27:41.606 [2024-07-22 18:04:45.834936] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902b50, cid 3, qid 0 00:27:41.606 [2024-07-22 18:04:45.834940] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902cb0, cid 4, qid 0 00:27:41.606 [2024-07-22 18:04:45.835166] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.606 [2024-07-22 18:04:45.835172] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.607 [2024-07-22 18:04:45.835175] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.607 [2024-07-22 18:04:45.835179] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902cb0) on 
tqpair=0x89a9e0 00:27:41.607 [2024-07-22 18:04:45.835184] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:41.607 [2024-07-22 18:04:45.835188] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:41.607 [2024-07-22 18:04:45.835198] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.607 [2024-07-22 18:04:45.835201] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.607 [2024-07-22 18:04:45.835204] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x89a9e0) 00:27:41.607 [2024-07-22 18:04:45.835210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.607 [2024-07-22 18:04:45.835219] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902cb0, cid 4, qid 0 00:27:41.607 [2024-07-22 18:04:45.839359] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.607 [2024-07-22 18:04:45.839368] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.607 [2024-07-22 18:04:45.839372] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.607 [2024-07-22 18:04:45.839375] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x89a9e0): datao=0, datal=4096, cccid=4 00:27:41.607 [2024-07-22 18:04:45.839379] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x902cb0) on tqpair(0x89a9e0): expected_datao=0, payload_size=4096 00:27:41.607 [2024-07-22 18:04:45.839386] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.607 [2024-07-22 18:04:45.839390] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.607 [2024-07-22 18:04:45.839395] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.607 [2024-07-22 18:04:45.839400] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.607 [2024-07-22 18:04:45.839403] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.607 [2024-07-22 18:04:45.839407] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902cb0) on tqpair=0x89a9e0 00:27:41.607 [2024-07-22 18:04:45.839419] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:41.607 [2024-07-22 18:04:45.839440] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.607 [2024-07-22 18:04:45.839444] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.607 [2024-07-22 18:04:45.839448] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x89a9e0) 00:27:41.607 [2024-07-22 18:04:45.839454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.607 [2024-07-22 18:04:45.839460] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.607 [2024-07-22 18:04:45.839464] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.607 [2024-07-22 18:04:45.839467] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x89a9e0) 00:27:41.607 [2024-07-22 18:04:45.839472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE 
(18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.607 [2024-07-22 18:04:45.839489] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902cb0, cid 4, qid 0 00:27:41.607 [2024-07-22 18:04:45.839494] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902e10, cid 5, qid 0 00:27:41.607 [2024-07-22 18:04:45.839766] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.607 [2024-07-22 18:04:45.839772] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.607 [2024-07-22 18:04:45.839775] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.607 [2024-07-22 18:04:45.839778] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x89a9e0): datao=0, datal=1024, cccid=4 00:27:41.607 [2024-07-22 18:04:45.839782] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x902cb0) on tqpair(0x89a9e0): expected_datao=0, payload_size=1024 00:27:41.607 [2024-07-22 18:04:45.839789] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.607 [2024-07-22 18:04:45.839792] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.607 [2024-07-22 18:04:45.839797] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.607 [2024-07-22 18:04:45.839802] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.607 [2024-07-22 18:04:45.839805] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.607 [2024-07-22 18:04:45.839809] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902e10) on tqpair=0x89a9e0 00:27:41.876 [2024-07-22 18:04:45.880579] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.876 [2024-07-22 18:04:45.880589] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.876 [2024-07-22 18:04:45.880592] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.876 [2024-07-22 18:04:45.880596] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902cb0) on tqpair=0x89a9e0 00:27:41.876 [2024-07-22 18:04:45.880606] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.876 [2024-07-22 18:04:45.880610] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.876 [2024-07-22 18:04:45.880613] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x89a9e0) 00:27:41.876 [2024-07-22 18:04:45.880619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.876 [2024-07-22 18:04:45.880632] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902cb0, cid 4, qid 0 00:27:41.876 [2024-07-22 18:04:45.880813] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.876 [2024-07-22 18:04:45.880819] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.876 [2024-07-22 18:04:45.880822] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.876 [2024-07-22 18:04:45.880825] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x89a9e0): datao=0, datal=3072, cccid=4 00:27:41.876 [2024-07-22 18:04:45.880829] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x902cb0) on tqpair(0x89a9e0): expected_datao=0, payload_size=3072 00:27:41.876 [2024-07-22 18:04:45.880855] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:27:41.876 [2024-07-22 18:04:45.880858] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.876 [2024-07-22 18:04:45.921516] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.876 [2024-07-22 18:04:45.921528] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.876 [2024-07-22 18:04:45.921531] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.876 [2024-07-22 18:04:45.921535] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902cb0) on tqpair=0x89a9e0 00:27:41.876 [2024-07-22 18:04:45.921544] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.876 [2024-07-22 18:04:45.921548] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.876 [2024-07-22 18:04:45.921551] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x89a9e0) 00:27:41.876 [2024-07-22 18:04:45.921558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.876 [2024-07-22 18:04:45.921574] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902cb0, cid 4, qid 0 00:27:41.876 [2024-07-22 18:04:45.921811] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.876 [2024-07-22 18:04:45.921817] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.876 [2024-07-22 18:04:45.921820] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.876 [2024-07-22 18:04:45.921823] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x89a9e0): datao=0, datal=8, cccid=4 00:27:41.876 [2024-07-22 18:04:45.921827] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x902cb0) on tqpair(0x89a9e0): expected_datao=0, payload_size=8 00:27:41.876 [2024-07-22 18:04:45.921834] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.876 [2024-07-22 18:04:45.921837] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.876 [2024-07-22 18:04:45.966359] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.876 [2024-07-22 18:04:45.966368] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.876 [2024-07-22 18:04:45.966371] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.876 [2024-07-22 18:04:45.966375] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902cb0) on tqpair=0x89a9e0 00:27:41.876 ===================================================== 00:27:41.876 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:41.876 ===================================================== 00:27:41.876 Controller Capabilities/Features 00:27:41.876 ================================ 00:27:41.876 Vendor ID: 0000 00:27:41.876 Subsystem Vendor ID: 0000 00:27:41.876 Serial Number: .................... 00:27:41.876 Model Number: ........................................ 
00:27:41.876 Firmware Version: 24.01.1 00:27:41.876 Recommended Arb Burst: 0 00:27:41.876 IEEE OUI Identifier: 00 00 00 00:27:41.876 Multi-path I/O 00:27:41.876 May have multiple subsystem ports: No 00:27:41.876 May have multiple controllers: No 00:27:41.876 Associated with SR-IOV VF: No 00:27:41.876 Max Data Transfer Size: 131072 00:27:41.876 Max Number of Namespaces: 0 00:27:41.876 Max Number of I/O Queues: 1024 00:27:41.876 NVMe Specification Version (VS): 1.3 00:27:41.876 NVMe Specification Version (Identify): 1.3 00:27:41.876 Maximum Queue Entries: 128 00:27:41.876 Contiguous Queues Required: Yes 00:27:41.876 Arbitration Mechanisms Supported 00:27:41.876 Weighted Round Robin: Not Supported 00:27:41.876 Vendor Specific: Not Supported 00:27:41.876 Reset Timeout: 15000 ms 00:27:41.876 Doorbell Stride: 4 bytes 00:27:41.876 NVM Subsystem Reset: Not Supported 00:27:41.876 Command Sets Supported 00:27:41.876 NVM Command Set: Supported 00:27:41.876 Boot Partition: Not Supported 00:27:41.876 Memory Page Size Minimum: 4096 bytes 00:27:41.876 Memory Page Size Maximum: 4096 bytes 00:27:41.876 Persistent Memory Region: Not Supported 00:27:41.876 Optional Asynchronous Events Supported 00:27:41.876 Namespace Attribute Notices: Not Supported 00:27:41.876 Firmware Activation Notices: Not Supported 00:27:41.876 ANA Change Notices: Not Supported 00:27:41.876 PLE Aggregate Log Change Notices: Not Supported 00:27:41.876 LBA Status Info Alert Notices: Not Supported 00:27:41.876 EGE Aggregate Log Change Notices: Not Supported 00:27:41.876 Normal NVM Subsystem Shutdown event: Not Supported 00:27:41.876 Zone Descriptor Change Notices: Not Supported 00:27:41.876 Discovery Log Change Notices: Supported 00:27:41.876 Controller Attributes 00:27:41.876 128-bit Host Identifier: Not Supported 00:27:41.876 Non-Operational Permissive Mode: Not Supported 00:27:41.876 NVM Sets: Not Supported 00:27:41.876 Read Recovery Levels: Not Supported 00:27:41.876 Endurance Groups: Not Supported 00:27:41.876 Predictable Latency Mode: Not Supported 00:27:41.876 Traffic Based Keep ALive: Not Supported 00:27:41.876 Namespace Granularity: Not Supported 00:27:41.876 SQ Associations: Not Supported 00:27:41.876 UUID List: Not Supported 00:27:41.876 Multi-Domain Subsystem: Not Supported 00:27:41.876 Fixed Capacity Management: Not Supported 00:27:41.876 Variable Capacity Management: Not Supported 00:27:41.876 Delete Endurance Group: Not Supported 00:27:41.876 Delete NVM Set: Not Supported 00:27:41.876 Extended LBA Formats Supported: Not Supported 00:27:41.876 Flexible Data Placement Supported: Not Supported 00:27:41.876 00:27:41.876 Controller Memory Buffer Support 00:27:41.876 ================================ 00:27:41.876 Supported: No 00:27:41.876 00:27:41.876 Persistent Memory Region Support 00:27:41.876 ================================ 00:27:41.876 Supported: No 00:27:41.876 00:27:41.877 Admin Command Set Attributes 00:27:41.877 ============================ 00:27:41.877 Security Send/Receive: Not Supported 00:27:41.877 Format NVM: Not Supported 00:27:41.877 Firmware Activate/Download: Not Supported 00:27:41.877 Namespace Management: Not Supported 00:27:41.877 Device Self-Test: Not Supported 00:27:41.877 Directives: Not Supported 00:27:41.877 NVMe-MI: Not Supported 00:27:41.877 Virtualization Management: Not Supported 00:27:41.877 Doorbell Buffer Config: Not Supported 00:27:41.877 Get LBA Status Capability: Not Supported 00:27:41.877 Command & Feature Lockdown Capability: Not Supported 00:27:41.877 Abort Command Limit: 1 00:27:41.877 
Async Event Request Limit: 4 00:27:41.877 Number of Firmware Slots: N/A 00:27:41.877 Firmware Slot 1 Read-Only: N/A 00:27:41.877 Firmware Activation Without Reset: N/A 00:27:41.877 Multiple Update Detection Support: N/A 00:27:41.877 Firmware Update Granularity: No Information Provided 00:27:41.877 Per-Namespace SMART Log: No 00:27:41.877 Asymmetric Namespace Access Log Page: Not Supported 00:27:41.877 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:41.877 Command Effects Log Page: Not Supported 00:27:41.877 Get Log Page Extended Data: Supported 00:27:41.877 Telemetry Log Pages: Not Supported 00:27:41.877 Persistent Event Log Pages: Not Supported 00:27:41.877 Supported Log Pages Log Page: May Support 00:27:41.877 Commands Supported & Effects Log Page: Not Supported 00:27:41.877 Feature Identifiers & Effects Log Page:May Support 00:27:41.877 NVMe-MI Commands & Effects Log Page: May Support 00:27:41.877 Data Area 4 for Telemetry Log: Not Supported 00:27:41.877 Error Log Page Entries Supported: 128 00:27:41.877 Keep Alive: Not Supported 00:27:41.877 00:27:41.877 NVM Command Set Attributes 00:27:41.877 ========================== 00:27:41.877 Submission Queue Entry Size 00:27:41.877 Max: 1 00:27:41.877 Min: 1 00:27:41.877 Completion Queue Entry Size 00:27:41.877 Max: 1 00:27:41.877 Min: 1 00:27:41.877 Number of Namespaces: 0 00:27:41.877 Compare Command: Not Supported 00:27:41.877 Write Uncorrectable Command: Not Supported 00:27:41.877 Dataset Management Command: Not Supported 00:27:41.877 Write Zeroes Command: Not Supported 00:27:41.877 Set Features Save Field: Not Supported 00:27:41.877 Reservations: Not Supported 00:27:41.877 Timestamp: Not Supported 00:27:41.877 Copy: Not Supported 00:27:41.877 Volatile Write Cache: Not Present 00:27:41.877 Atomic Write Unit (Normal): 1 00:27:41.877 Atomic Write Unit (PFail): 1 00:27:41.877 Atomic Compare & Write Unit: 1 00:27:41.877 Fused Compare & Write: Supported 00:27:41.877 Scatter-Gather List 00:27:41.877 SGL Command Set: Supported 00:27:41.877 SGL Keyed: Supported 00:27:41.877 SGL Bit Bucket Descriptor: Not Supported 00:27:41.877 SGL Metadata Pointer: Not Supported 00:27:41.877 Oversized SGL: Not Supported 00:27:41.877 SGL Metadata Address: Not Supported 00:27:41.877 SGL Offset: Supported 00:27:41.877 Transport SGL Data Block: Not Supported 00:27:41.877 Replay Protected Memory Block: Not Supported 00:27:41.877 00:27:41.877 Firmware Slot Information 00:27:41.877 ========================= 00:27:41.877 Active slot: 0 00:27:41.877 00:27:41.877 00:27:41.877 Error Log 00:27:41.877 ========= 00:27:41.877 00:27:41.877 Active Namespaces 00:27:41.877 ================= 00:27:41.877 Discovery Log Page 00:27:41.877 ================== 00:27:41.877 Generation Counter: 2 00:27:41.877 Number of Records: 2 00:27:41.877 Record Format: 0 00:27:41.877 00:27:41.877 Discovery Log Entry 0 00:27:41.877 ---------------------- 00:27:41.877 Transport Type: 3 (TCP) 00:27:41.877 Address Family: 1 (IPv4) 00:27:41.877 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:41.877 Entry Flags: 00:27:41.877 Duplicate Returned Information: 1 00:27:41.877 Explicit Persistent Connection Support for Discovery: 1 00:27:41.877 Transport Requirements: 00:27:41.877 Secure Channel: Not Required 00:27:41.877 Port ID: 0 (0x0000) 00:27:41.877 Controller ID: 65535 (0xffff) 00:27:41.877 Admin Max SQ Size: 128 00:27:41.877 Transport Service Identifier: 4420 00:27:41.877 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:41.877 Transport Address: 10.0.0.2 00:27:41.877 
Discovery Log Entry 1 00:27:41.877 ---------------------- 00:27:41.877 Transport Type: 3 (TCP) 00:27:41.877 Address Family: 1 (IPv4) 00:27:41.877 Subsystem Type: 2 (NVM Subsystem) 00:27:41.877 Entry Flags: 00:27:41.877 Duplicate Returned Information: 0 00:27:41.877 Explicit Persistent Connection Support for Discovery: 0 00:27:41.877 Transport Requirements: 00:27:41.877 Secure Channel: Not Required 00:27:41.877 Port ID: 0 (0x0000) 00:27:41.877 Controller ID: 65535 (0xffff) 00:27:41.877 Admin Max SQ Size: 128 00:27:41.877 Transport Service Identifier: 4420 00:27:41.877 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:41.877 Transport Address: 10.0.0.2 [2024-07-22 18:04:45.966458] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:27:41.877 [2024-07-22 18:04:45.966470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.877 [2024-07-22 18:04:45.966476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.877 [2024-07-22 18:04:45.966481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.877 [2024-07-22 18:04:45.966487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.877 [2024-07-22 18:04:45.966494] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.877 [2024-07-22 18:04:45.966498] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.877 [2024-07-22 18:04:45.966501] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89a9e0) 00:27:41.877 [2024-07-22 18:04:45.966508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.877 [2024-07-22 18:04:45.966520] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902b50, cid 3, qid 0 00:27:41.877 [2024-07-22 18:04:45.966604] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.877 [2024-07-22 18:04:45.966609] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.877 [2024-07-22 18:04:45.966613] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.877 [2024-07-22 18:04:45.966616] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902b50) on tqpair=0x89a9e0 00:27:41.877 [2024-07-22 18:04:45.966622] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.877 [2024-07-22 18:04:45.966626] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.877 [2024-07-22 18:04:45.966629] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89a9e0) 00:27:41.877 [2024-07-22 18:04:45.966635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.877 [2024-07-22 18:04:45.966647] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902b50, cid 3, qid 0 00:27:41.877 [2024-07-22 18:04:45.966867] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.877 [2024-07-22 18:04:45.966873] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.877 [2024-07-22 18:04:45.966876] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.877 [2024-07-22 18:04:45.966879] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902b50) on tqpair=0x89a9e0 00:27:41.877 [2024-07-22 18:04:45.966886] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:27:41.877 [2024-07-22 18:04:45.966890] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:27:41.877 [2024-07-22 18:04:45.966899] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.877 [2024-07-22 18:04:45.966902] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.877 [2024-07-22 18:04:45.966906] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89a9e0) 00:27:41.877 [2024-07-22 18:04:45.966912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.877 [2024-07-22 18:04:45.966921] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902b50, cid 3, qid 0 00:27:41.877 [2024-07-22 18:04:45.967078] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.877 [2024-07-22 18:04:45.967084] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.877 [2024-07-22 18:04:45.967087] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.877 [2024-07-22 18:04:45.967090] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902b50) on tqpair=0x89a9e0 00:27:41.877 [2024-07-22 18:04:45.967099] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.877 [2024-07-22 18:04:45.967103] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.877 [2024-07-22 18:04:45.967106] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89a9e0) 00:27:41.878 [2024-07-22 18:04:45.967112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.878 [2024-07-22 18:04:45.967121] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902b50, cid 3, qid 0 00:27:41.878 [2024-07-22 18:04:45.967281] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.878 [2024-07-22 18:04:45.967287] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.878 [2024-07-22 18:04:45.967290] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.967294] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902b50) on tqpair=0x89a9e0 00:27:41.878 [2024-07-22 18:04:45.967302] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.967306] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.967309] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89a9e0) 00:27:41.878 [2024-07-22 18:04:45.967315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.878 [2024-07-22 18:04:45.967324] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902b50, cid 3, qid 0 00:27:41.878 [2024-07-22 18:04:45.967523] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.878 [2024-07-22 
18:04:45.967529] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.878 [2024-07-22 18:04:45.967532] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.967536] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902b50) on tqpair=0x89a9e0 00:27:41.878 [2024-07-22 18:04:45.967545] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.967548] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.967551] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89a9e0) 00:27:41.878 [2024-07-22 18:04:45.967558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.878 [2024-07-22 18:04:45.967567] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902b50, cid 3, qid 0 00:27:41.878 [2024-07-22 18:04:45.967746] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.878 [2024-07-22 18:04:45.967752] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.878 [2024-07-22 18:04:45.967756] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.967760] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902b50) on tqpair=0x89a9e0 00:27:41.878 [2024-07-22 18:04:45.967769] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.967772] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.967775] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89a9e0) 00:27:41.878 [2024-07-22 18:04:45.967781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.878 [2024-07-22 18:04:45.967790] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902b50, cid 3, qid 0 00:27:41.878 [2024-07-22 18:04:45.967973] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.878 [2024-07-22 18:04:45.967978] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.878 [2024-07-22 18:04:45.967981] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.967985] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902b50) on tqpair=0x89a9e0 00:27:41.878 [2024-07-22 18:04:45.967993] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.967997] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.968000] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89a9e0) 00:27:41.878 [2024-07-22 18:04:45.968006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.878 [2024-07-22 18:04:45.968015] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902b50, cid 3, qid 0 00:27:41.878 [2024-07-22 18:04:45.968181] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.878 [2024-07-22 18:04:45.968186] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.878 [2024-07-22 18:04:45.968190] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.878 
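The burst of FABRIC PROPERTY GET capsules above is the shutdown poll loop: after "Prepare to destruct" the host requests a normal shutdown by setting CC.SHN, then repeatedly reads CSTS until the SHST field reports shutdown complete (the log records completion a few entries later). A minimal sketch under the same assumptions as the enable-sequence example (hypothetical accessors, not SPDK's API):

#include <stdint.h>

uint32_t read_reg32(uint32_t offset);              /* hypothetical property accessors,   */
void write_reg32(uint32_t offset, uint32_t value); /* as in the enable-sequence sketch   */

#define NVME_REG_CC         0x14
#define NVME_REG_CSTS       0x1c
#define CC_SHN_NORMAL       (0x1u << 14) /* CC.SHN = 01b: normal shutdown notification   */
#define CSTS_SHST_MASK      (0x3u << 2)  /* CSTS.SHST: shutdown status field             */
#define CSTS_SHST_COMPLETE  (0x2u << 2)  /* 10b: shutdown processing complete            */

/* Graceful shutdown reflected by the repeated FABRIC PROPERTY GET entries above. */
static int shutdown_controller(uint32_t timeout_ms)
{
	write_reg32(NVME_REG_CC, read_reg32(NVME_REG_CC) | CC_SHN_NORMAL);

	for (uint32_t waited_ms = 0; waited_ms <= timeout_ms; waited_ms++) {
		if ((read_reg32(NVME_REG_CSTS) & CSTS_SHST_MASK) == CSTS_SHST_COMPLETE)
			return 0;   /* e.g. "shutdown complete in 7 milliseconds" in the log */
		/* sleep ~1 ms between polls (platform-specific, omitted) */
	}
	return -1;                  /* exceeded the shutdown timeout (10000 ms in the log) */
}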
[2024-07-22 18:04:45.968193] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902b50) on tqpair=0x89a9e0 00:27:41.878 [2024-07-22 18:04:45.968202] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.968205] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.968208] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89a9e0) 00:27:41.878 [2024-07-22 18:04:45.968214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.878 [2024-07-22 18:04:45.968223] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902b50, cid 3, qid 0 00:27:41.878 [2024-07-22 18:04:45.968426] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.878 [2024-07-22 18:04:45.968432] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.878 [2024-07-22 18:04:45.968436] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.968439] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902b50) on tqpair=0x89a9e0 00:27:41.878 [2024-07-22 18:04:45.968448] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.968451] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.968454] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89a9e0) 00:27:41.878 [2024-07-22 18:04:45.968460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.878 [2024-07-22 18:04:45.968469] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902b50, cid 3, qid 0 00:27:41.878 [2024-07-22 18:04:45.968688] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.878 [2024-07-22 18:04:45.968694] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.878 [2024-07-22 18:04:45.968697] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.968702] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902b50) on tqpair=0x89a9e0 00:27:41.878 [2024-07-22 18:04:45.968712] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.968715] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.968719] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89a9e0) 00:27:41.878 [2024-07-22 18:04:45.968725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.878 [2024-07-22 18:04:45.968734] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902b50, cid 3, qid 0 00:27:41.878 [2024-07-22 18:04:45.968907] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.878 [2024-07-22 18:04:45.968913] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.878 [2024-07-22 18:04:45.968916] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.968920] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902b50) on tqpair=0x89a9e0 00:27:41.878 [2024-07-22 18:04:45.968928] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.968932] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.968935] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89a9e0) 00:27:41.878 [2024-07-22 18:04:45.968941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.878 [2024-07-22 18:04:45.968950] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902b50, cid 3, qid 0 00:27:41.878 [2024-07-22 18:04:45.969121] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.878 [2024-07-22 18:04:45.969127] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.878 [2024-07-22 18:04:45.969130] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.969133] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902b50) on tqpair=0x89a9e0 00:27:41.878 [2024-07-22 18:04:45.969142] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.969145] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.969149] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89a9e0) 00:27:41.878 [2024-07-22 18:04:45.969155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.878 [2024-07-22 18:04:45.969163] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902b50, cid 3, qid 0 00:27:41.878 [2024-07-22 18:04:45.969365] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.878 [2024-07-22 18:04:45.969372] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.878 [2024-07-22 18:04:45.969375] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.969378] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902b50) on tqpair=0x89a9e0 00:27:41.878 [2024-07-22 18:04:45.969387] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.969390] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.969394] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89a9e0) 00:27:41.878 [2024-07-22 18:04:45.969400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.878 [2024-07-22 18:04:45.969409] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902b50, cid 3, qid 0 00:27:41.878 [2024-07-22 18:04:45.969597] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.878 [2024-07-22 18:04:45.969603] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.878 [2024-07-22 18:04:45.969606] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.969609] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902b50) on tqpair=0x89a9e0 00:27:41.878 [2024-07-22 18:04:45.969619] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.969623] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.969626] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89a9e0) 00:27:41.878 [2024-07-22 18:04:45.969632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.878 [2024-07-22 18:04:45.969641] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902b50, cid 3, qid 0 00:27:41.878 [2024-07-22 18:04:45.969821] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.878 [2024-07-22 18:04:45.969827] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.878 [2024-07-22 18:04:45.969830] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.878 [2024-07-22 18:04:45.969833] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902b50) on tqpair=0x89a9e0 00:27:41.879 [2024-07-22 18:04:45.969842] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.879 [2024-07-22 18:04:45.969846] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.879 [2024-07-22 18:04:45.969849] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89a9e0) 00:27:41.879 [2024-07-22 18:04:45.969855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.879 [2024-07-22 18:04:45.969864] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902b50, cid 3, qid 0 00:27:41.879 [2024-07-22 18:04:45.970062] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.879 [2024-07-22 18:04:45.970068] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.879 [2024-07-22 18:04:45.970071] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.879 [2024-07-22 18:04:45.970075] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902b50) on tqpair=0x89a9e0 00:27:41.879 [2024-07-22 18:04:45.970083] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.879 [2024-07-22 18:04:45.970087] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.879 [2024-07-22 18:04:45.970090] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89a9e0) 00:27:41.879 [2024-07-22 18:04:45.970096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.879 [2024-07-22 18:04:45.970105] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902b50, cid 3, qid 0 00:27:41.879 [2024-07-22 18:04:45.970302] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.879 [2024-07-22 18:04:45.970308] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.879 [2024-07-22 18:04:45.970311] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.879 [2024-07-22 18:04:45.970314] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902b50) on tqpair=0x89a9e0 00:27:41.879 [2024-07-22 18:04:45.970323] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.879 [2024-07-22 18:04:45.970326] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.879 [2024-07-22 18:04:45.970329] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x89a9e0) 00:27:41.879 [2024-07-22 18:04:45.970335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.879 [2024-07-22 18:04:45.970344] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x902b50, cid 3, qid 0 00:27:41.879 [2024-07-22 18:04:45.974356] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.879 [2024-07-22 18:04:45.974363] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.879 [2024-07-22 18:04:45.974366] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.879 [2024-07-22 18:04:45.974369] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x902b50) on tqpair=0x89a9e0 00:27:41.879 [2024-07-22 18:04:45.974376] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:27:41.879 00:27:41.879 18:04:45 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:41.879 [2024-07-22 18:04:46.011643] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:41.879 [2024-07-22 18:04:46.011709] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1812231 ] 00:27:41.879 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.879 [2024-07-22 18:04:46.043010] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:27:41.879 [2024-07-22 18:04:46.043051] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:41.879 [2024-07-22 18:04:46.043055] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:41.879 [2024-07-22 18:04:46.043065] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:41.879 [2024-07-22 18:04:46.043071] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:41.879 [2024-07-22 18:04:46.046378] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:27:41.879 [2024-07-22 18:04:46.046407] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d6d9e0 0 00:27:41.879 [2024-07-22 18:04:46.054360] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:41.879 [2024-07-22 18:04:46.054368] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:41.879 [2024-07-22 18:04:46.054372] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:41.879 [2024-07-22 18:04:46.054375] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:41.879 [2024-07-22 18:04:46.054402] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.879 [2024-07-22 18:04:46.054407] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.879 [2024-07-22 18:04:46.054411] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d6d9e0) 00:27:41.879 [2024-07-22 18:04:46.054421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:41.879 [2024-07-22 18:04:46.054435] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd5730, cid 0, qid 0 00:27:41.879 [2024-07-22 18:04:46.062361] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.879 [2024-07-22 18:04:46.062369] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.879 [2024-07-22 18:04:46.062373] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.879 [2024-07-22 18:04:46.062377] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd5730) on tqpair=0x1d6d9e0 00:27:41.879 [2024-07-22 18:04:46.062386] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:41.879 [2024-07-22 18:04:46.062391] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:41.879 [2024-07-22 18:04:46.062396] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:41.879 [2024-07-22 18:04:46.062406] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.879 [2024-07-22 18:04:46.062409] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.879 [2024-07-22 18:04:46.062413] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d6d9e0) 00:27:41.879 [2024-07-22 18:04:46.062420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.879 [2024-07-22 18:04:46.062434] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd5730, cid 0, qid 0 00:27:41.879 [2024-07-22 18:04:46.062623] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.879 [2024-07-22 18:04:46.062629] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.879 [2024-07-22 18:04:46.062632] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.879 [2024-07-22 18:04:46.062635] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd5730) on tqpair=0x1d6d9e0 00:27:41.879 [2024-07-22 18:04:46.062641] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:41.879 [2024-07-22 18:04:46.062647] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:41.879 [2024-07-22 18:04:46.062653] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.879 [2024-07-22 18:04:46.062657] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.879 [2024-07-22 18:04:46.062660] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d6d9e0) 00:27:41.879 [2024-07-22 18:04:46.062666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.879 [2024-07-22 18:04:46.062676] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd5730, cid 0, qid 0 00:27:41.879 [2024-07-22 18:04:46.062827] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.879 [2024-07-22 18:04:46.062833] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.879 [2024-07-22 18:04:46.062836] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.879 [2024-07-22 18:04:46.062840] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd5730) on 
tqpair=0x1d6d9e0 00:27:41.879 [2024-07-22 18:04:46.062845] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:41.879 [2024-07-22 18:04:46.062853] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:41.879 [2024-07-22 18:04:46.062859] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.879 [2024-07-22 18:04:46.062862] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.879 [2024-07-22 18:04:46.062865] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d6d9e0) 00:27:41.879 [2024-07-22 18:04:46.062871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.879 [2024-07-22 18:04:46.062881] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd5730, cid 0, qid 0 00:27:41.879 [2024-07-22 18:04:46.063029] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.879 [2024-07-22 18:04:46.063034] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.879 [2024-07-22 18:04:46.063037] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.879 [2024-07-22 18:04:46.063041] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd5730) on tqpair=0x1d6d9e0 00:27:41.879 [2024-07-22 18:04:46.063046] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:41.879 [2024-07-22 18:04:46.063054] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.879 [2024-07-22 18:04:46.063058] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.879 [2024-07-22 18:04:46.063061] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d6d9e0) 00:27:41.879 [2024-07-22 18:04:46.063067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.879 [2024-07-22 18:04:46.063076] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd5730, cid 0, qid 0 00:27:41.879 [2024-07-22 18:04:46.063266] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.879 [2024-07-22 18:04:46.063274] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.879 [2024-07-22 18:04:46.063277] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.879 [2024-07-22 18:04:46.063281] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd5730) on tqpair=0x1d6d9e0 00:27:41.879 [2024-07-22 18:04:46.063285] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:41.879 [2024-07-22 18:04:46.063290] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:27:41.879 [2024-07-22 18:04:46.063297] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:41.880 [2024-07-22 18:04:46.063402] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:41.880 [2024-07-22 18:04:46.063406] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:41.880 [2024-07-22 18:04:46.063413] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.880 [2024-07-22 18:04:46.063416] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.880 [2024-07-22 18:04:46.063419] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d6d9e0) 00:27:41.880 [2024-07-22 18:04:46.063425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.880 [2024-07-22 18:04:46.063436] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd5730, cid 0, qid 0 00:27:41.880 [2024-07-22 18:04:46.063653] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.880 [2024-07-22 18:04:46.063659] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.880 [2024-07-22 18:04:46.063662] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.880 [2024-07-22 18:04:46.063665] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd5730) on tqpair=0x1d6d9e0 00:27:41.880 [2024-07-22 18:04:46.063671] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:41.880 [2024-07-22 18:04:46.063679] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.880 [2024-07-22 18:04:46.063682] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.880 [2024-07-22 18:04:46.063685] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d6d9e0) 00:27:41.880 [2024-07-22 18:04:46.063692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.880 [2024-07-22 18:04:46.063701] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd5730, cid 0, qid 0 00:27:41.880 [2024-07-22 18:04:46.063846] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.880 [2024-07-22 18:04:46.063852] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.880 [2024-07-22 18:04:46.063855] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.880 [2024-07-22 18:04:46.063858] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd5730) on tqpair=0x1d6d9e0 00:27:41.880 [2024-07-22 18:04:46.063863] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:41.880 [2024-07-22 18:04:46.063867] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:27:41.880 [2024-07-22 18:04:46.063875] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:41.880 [2024-07-22 18:04:46.063882] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:41.880 [2024-07-22 18:04:46.063889] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.880 [2024-07-22 18:04:46.063894] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.880 [2024-07-22 18:04:46.063897] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d6d9e0) 00:27:41.880 [2024-07-22 18:04:46.063904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.880 [2024-07-22 18:04:46.063913] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd5730, cid 0, qid 0 00:27:41.880 [2024-07-22 18:04:46.064114] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.880 [2024-07-22 18:04:46.064121] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.880 [2024-07-22 18:04:46.064124] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.880 [2024-07-22 18:04:46.064127] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d6d9e0): datao=0, datal=4096, cccid=0 00:27:41.880 [2024-07-22 18:04:46.064131] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd5730) on tqpair(0x1d6d9e0): expected_datao=0, payload_size=4096 00:27:41.880 [2024-07-22 18:04:46.064139] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.880 [2024-07-22 18:04:46.064142] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.880 [2024-07-22 18:04:46.064268] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.880 [2024-07-22 18:04:46.064273] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.880 [2024-07-22 18:04:46.064276] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.880 [2024-07-22 18:04:46.064280] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd5730) on tqpair=0x1d6d9e0 00:27:41.880 [2024-07-22 18:04:46.064287] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:41.880 [2024-07-22 18:04:46.064291] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:41.880 [2024-07-22 18:04:46.064295] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:41.880 [2024-07-22 18:04:46.064299] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:27:41.880 [2024-07-22 18:04:46.064303] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:41.880 [2024-07-22 18:04:46.064307] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:41.880 [2024-07-22 18:04:46.064317] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:27:41.880 [2024-07-22 18:04:46.064323] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.880 [2024-07-22 18:04:46.064327] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.880 [2024-07-22 18:04:46.064330] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d6d9e0) 00:27:41.880 [2024-07-22 18:04:46.064337] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:41.880 [2024-07-22 18:04:46.064346] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd5730, cid 0, qid 0 00:27:41.880 [2024-07-22 
18:04:46.064560] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.880 [2024-07-22 18:04:46.064566] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.880 [2024-07-22 18:04:46.064569] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.880 [2024-07-22 18:04:46.064572] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd5730) on tqpair=0x1d6d9e0 00:27:41.880 [2024-07-22 18:04:46.064579] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.880 [2024-07-22 18:04:46.064583] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.880 [2024-07-22 18:04:46.064586] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d6d9e0) 00:27:41.880 [2024-07-22 18:04:46.064592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.880 [2024-07-22 18:04:46.064599] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.880 [2024-07-22 18:04:46.064603] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.880 [2024-07-22 18:04:46.064606] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d6d9e0) 00:27:41.880 [2024-07-22 18:04:46.064612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.880 [2024-07-22 18:04:46.064617] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.880 [2024-07-22 18:04:46.064621] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.880 [2024-07-22 18:04:46.064624] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d6d9e0) 00:27:41.880 [2024-07-22 18:04:46.064629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.880 [2024-07-22 18:04:46.064635] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.880 [2024-07-22 18:04:46.064638] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.880 [2024-07-22 18:04:46.064641] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6d9e0) 00:27:41.880 [2024-07-22 18:04:46.064646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.880 [2024-07-22 18:04:46.064651] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:41.880 [2024-07-22 18:04:46.064660] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:41.880 [2024-07-22 18:04:46.064666] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.880 [2024-07-22 18:04:46.064669] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.880 [2024-07-22 18:04:46.064672] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d6d9e0) 00:27:41.880 [2024-07-22 18:04:46.064678] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.880 [2024-07-22 18:04:46.064689] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1dd5730, cid 0, qid 0 00:27:41.880 [2024-07-22 18:04:46.064694] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd5890, cid 1, qid 0 00:27:41.880 [2024-07-22 18:04:46.064698] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd59f0, cid 2, qid 0 00:27:41.880 [2024-07-22 18:04:46.064703] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd5b50, cid 3, qid 0 00:27:41.880 [2024-07-22 18:04:46.064707] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd5cb0, cid 4, qid 0 00:27:41.880 [2024-07-22 18:04:46.064924] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.881 [2024-07-22 18:04:46.064930] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.881 [2024-07-22 18:04:46.064933] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.881 [2024-07-22 18:04:46.064937] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd5cb0) on tqpair=0x1d6d9e0 00:27:41.881 [2024-07-22 18:04:46.064942] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:27:41.881 [2024-07-22 18:04:46.064946] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:41.881 [2024-07-22 18:04:46.064953] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:27:41.881 [2024-07-22 18:04:46.064960] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:41.881 [2024-07-22 18:04:46.064966] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.881 [2024-07-22 18:04:46.064971] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.881 [2024-07-22 18:04:46.064974] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d6d9e0) 00:27:41.881 [2024-07-22 18:04:46.064980] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:41.881 [2024-07-22 18:04:46.064990] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd5cb0, cid 4, qid 0 00:27:41.881 [2024-07-22 18:04:46.065175] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.881 [2024-07-22 18:04:46.065181] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.881 [2024-07-22 18:04:46.065184] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.881 [2024-07-22 18:04:46.065187] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd5cb0) on tqpair=0x1d6d9e0 00:27:41.881 [2024-07-22 18:04:46.065245] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:27:41.881 [2024-07-22 18:04:46.065254] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:41.881 [2024-07-22 18:04:46.065261] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.881 [2024-07-22 18:04:46.065264] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.881 [2024-07-22 18:04:46.065267] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=4 on tqpair(0x1d6d9e0) 00:27:41.881 [2024-07-22 18:04:46.065273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.881 [2024-07-22 18:04:46.065283] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd5cb0, cid 4, qid 0 00:27:41.881 [2024-07-22 18:04:46.065502] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.881 [2024-07-22 18:04:46.065508] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.881 [2024-07-22 18:04:46.065512] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.881 [2024-07-22 18:04:46.065515] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d6d9e0): datao=0, datal=4096, cccid=4 00:27:41.881 [2024-07-22 18:04:46.065519] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd5cb0) on tqpair(0x1d6d9e0): expected_datao=0, payload_size=4096 00:27:41.881 [2024-07-22 18:04:46.065552] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.881 [2024-07-22 18:04:46.065556] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.881 [2024-07-22 18:04:46.065729] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.881 [2024-07-22 18:04:46.065735] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.881 [2024-07-22 18:04:46.065738] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.881 [2024-07-22 18:04:46.065742] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd5cb0) on tqpair=0x1d6d9e0 00:27:41.881 [2024-07-22 18:04:46.065752] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:27:41.881 [2024-07-22 18:04:46.065763] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:27:41.881 [2024-07-22 18:04:46.065771] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:27:41.881 [2024-07-22 18:04:46.065777] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.881 [2024-07-22 18:04:46.065781] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.881 [2024-07-22 18:04:46.065784] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d6d9e0) 00:27:41.881 [2024-07-22 18:04:46.065790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.881 [2024-07-22 18:04:46.065800] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd5cb0, cid 4, qid 0 00:27:41.881 [2024-07-22 18:04:46.065994] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.881 [2024-07-22 18:04:46.066000] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.881 [2024-07-22 18:04:46.066003] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.881 [2024-07-22 18:04:46.066006] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d6d9e0): datao=0, datal=4096, cccid=4 00:27:41.881 [2024-07-22 18:04:46.066010] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd5cb0) on tqpair(0x1d6d9e0): expected_datao=0, payload_size=4096 
00:27:41.881 [2024-07-22 18:04:46.066033] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.881 [2024-07-22 18:04:46.066037] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.881 [2024-07-22 18:04:46.066184] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.881 [2024-07-22 18:04:46.066190] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.881 [2024-07-22 18:04:46.066193] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.881 [2024-07-22 18:04:46.066196] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd5cb0) on tqpair=0x1d6d9e0 00:27:41.881 [2024-07-22 18:04:46.066209] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:41.881 [2024-07-22 18:04:46.066217] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:41.881 [2024-07-22 18:04:46.066223] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.881 [2024-07-22 18:04:46.066227] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.881 [2024-07-22 18:04:46.066230] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d6d9e0) 00:27:41.881 [2024-07-22 18:04:46.066236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.881 [2024-07-22 18:04:46.066246] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd5cb0, cid 4, qid 0 00:27:41.881 [2024-07-22 18:04:46.070361] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.881 [2024-07-22 18:04:46.070369] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.881 [2024-07-22 18:04:46.070372] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.881 [2024-07-22 18:04:46.070375] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d6d9e0): datao=0, datal=4096, cccid=4 00:27:41.881 [2024-07-22 18:04:46.070379] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd5cb0) on tqpair(0x1d6d9e0): expected_datao=0, payload_size=4096 00:27:41.881 [2024-07-22 18:04:46.070385] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.881 [2024-07-22 18:04:46.070389] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.881 [2024-07-22 18:04:46.070394] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.881 [2024-07-22 18:04:46.070399] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.881 [2024-07-22 18:04:46.070402] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.881 [2024-07-22 18:04:46.070406] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd5cb0) on tqpair=0x1d6d9e0 00:27:41.881 [2024-07-22 18:04:46.070413] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:41.881 [2024-07-22 18:04:46.070420] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:27:41.881 [2024-07-22 18:04:46.070428] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:27:41.881 [2024-07-22 18:04:46.070434] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:41.881 [2024-07-22 18:04:46.070440] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:27:41.881 [2024-07-22 18:04:46.070445] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:27:41.881 [2024-07-22 18:04:46.070449] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:27:41.881 [2024-07-22 18:04:46.070453] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:27:41.881 [2024-07-22 18:04:46.070465] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.881 [2024-07-22 18:04:46.070468] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.881 [2024-07-22 18:04:46.070472] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d6d9e0) 00:27:41.881 [2024-07-22 18:04:46.070478] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.881 [2024-07-22 18:04:46.070484] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.881 [2024-07-22 18:04:46.070487] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.881 [2024-07-22 18:04:46.070490] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d6d9e0) 00:27:41.881 [2024-07-22 18:04:46.070496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.881 [2024-07-22 18:04:46.070509] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd5cb0, cid 4, qid 0 00:27:41.881 [2024-07-22 18:04:46.070513] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd5e10, cid 5, qid 0 00:27:41.881 [2024-07-22 18:04:46.070710] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.881 [2024-07-22 18:04:46.070716] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.881 [2024-07-22 18:04:46.070719] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.881 [2024-07-22 18:04:46.070723] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd5cb0) on tqpair=0x1d6d9e0 00:27:41.882 [2024-07-22 18:04:46.070729] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.882 [2024-07-22 18:04:46.070735] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.882 [2024-07-22 18:04:46.070738] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.070741] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd5e10) on tqpair=0x1d6d9e0 00:27:41.882 [2024-07-22 18:04:46.070751] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.070754] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.070757] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d6d9e0) 
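[Editor's note] The debug trace above records the SPDK host driver walking the NVMe-oF controller-enable state machine over TCP: write CC.EN = 1, wait for CSTS.RDY = 1, Identify Controller, configure AER, set the keep-alive timeout and number of queues, then identify the active namespaces before reaching the ready state. As a rough, illustrative equivalent from a Linux host shell (not part of this run; assumes nvme-cli is installed, the kernel nvme-tcp module is loaded, the target at 10.0.0.2:4420 from the dump below is reachable, and the new controller shows up as /dev/nvme0):
  # hedged sketch only, not the command sequence this test executed
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme id-ctrl  /dev/nvme0        # same data as the controller capability dump below
  nvme id-ns    /dev/nvme0 -n 1   # namespace 1, the one the log reports as added
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1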
00:27:41.882 [2024-07-22 18:04:46.070763] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.882 [2024-07-22 18:04:46.070772] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd5e10, cid 5, qid 0 00:27:41.882 [2024-07-22 18:04:46.070933] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.882 [2024-07-22 18:04:46.070939] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.882 [2024-07-22 18:04:46.070942] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.070946] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd5e10) on tqpair=0x1d6d9e0 00:27:41.882 [2024-07-22 18:04:46.070955] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.070958] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.070961] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d6d9e0) 00:27:41.882 [2024-07-22 18:04:46.070967] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.882 [2024-07-22 18:04:46.070978] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd5e10, cid 5, qid 0 00:27:41.882 [2024-07-22 18:04:46.071183] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.882 [2024-07-22 18:04:46.071189] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.882 [2024-07-22 18:04:46.071192] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.071195] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd5e10) on tqpair=0x1d6d9e0 00:27:41.882 [2024-07-22 18:04:46.071204] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.071208] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.071211] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d6d9e0) 00:27:41.882 [2024-07-22 18:04:46.071217] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.882 [2024-07-22 18:04:46.071225] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd5e10, cid 5, qid 0 00:27:41.882 [2024-07-22 18:04:46.071385] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.882 [2024-07-22 18:04:46.071391] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.882 [2024-07-22 18:04:46.071394] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.071398] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd5e10) on tqpair=0x1d6d9e0 00:27:41.882 [2024-07-22 18:04:46.071409] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.071412] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.071416] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d6d9e0) 00:27:41.882 [2024-07-22 18:04:46.071422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 
nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.882 [2024-07-22 18:04:46.071428] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.071432] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.071435] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d6d9e0) 00:27:41.882 [2024-07-22 18:04:46.071441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.882 [2024-07-22 18:04:46.071447] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.071450] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.071454] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1d6d9e0) 00:27:41.882 [2024-07-22 18:04:46.071459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.882 [2024-07-22 18:04:46.071466] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.071469] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.071472] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d6d9e0) 00:27:41.882 [2024-07-22 18:04:46.071478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.882 [2024-07-22 18:04:46.071488] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd5e10, cid 5, qid 0 00:27:41.882 [2024-07-22 18:04:46.071493] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd5cb0, cid 4, qid 0 00:27:41.882 [2024-07-22 18:04:46.071498] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd5f70, cid 6, qid 0 00:27:41.882 [2024-07-22 18:04:46.071502] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd60d0, cid 7, qid 0 00:27:41.882 [2024-07-22 18:04:46.071709] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.882 [2024-07-22 18:04:46.071715] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.882 [2024-07-22 18:04:46.071718] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.071721] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d6d9e0): datao=0, datal=8192, cccid=5 00:27:41.882 [2024-07-22 18:04:46.071726] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd5e10) on tqpair(0x1d6d9e0): expected_datao=0, payload_size=8192 00:27:41.882 [2024-07-22 18:04:46.071811] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.071815] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.071820] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.882 [2024-07-22 18:04:46.071825] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.882 [2024-07-22 18:04:46.071829] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.071832] 
nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d6d9e0): datao=0, datal=512, cccid=4 00:27:41.882 [2024-07-22 18:04:46.071836] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd5cb0) on tqpair(0x1d6d9e0): expected_datao=0, payload_size=512 00:27:41.882 [2024-07-22 18:04:46.071843] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.071846] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.071851] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.882 [2024-07-22 18:04:46.071856] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.882 [2024-07-22 18:04:46.071859] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.071862] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d6d9e0): datao=0, datal=512, cccid=6 00:27:41.882 [2024-07-22 18:04:46.071866] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd5f70) on tqpair(0x1d6d9e0): expected_datao=0, payload_size=512 00:27:41.882 [2024-07-22 18:04:46.071873] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.071876] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.071881] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.882 [2024-07-22 18:04:46.071887] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.882 [2024-07-22 18:04:46.071890] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.071893] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d6d9e0): datao=0, datal=4096, cccid=7 00:27:41.882 [2024-07-22 18:04:46.071897] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd60d0) on tqpair(0x1d6d9e0): expected_datao=0, payload_size=4096 00:27:41.882 [2024-07-22 18:04:46.071914] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.071917] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.112538] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.882 [2024-07-22 18:04:46.112547] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.882 [2024-07-22 18:04:46.112550] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.112554] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd5e10) on tqpair=0x1d6d9e0 00:27:41.882 [2024-07-22 18:04:46.112567] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.882 [2024-07-22 18:04:46.112573] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.882 [2024-07-22 18:04:46.112576] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.112579] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd5cb0) on tqpair=0x1d6d9e0 00:27:41.882 [2024-07-22 18:04:46.112588] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.882 [2024-07-22 18:04:46.112594] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.882 [2024-07-22 18:04:46.112599] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.112602] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd5f70) on tqpair=0x1d6d9e0 00:27:41.882 [2024-07-22 18:04:46.112610] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.882 [2024-07-22 18:04:46.112615] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.882 [2024-07-22 18:04:46.112618] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.882 [2024-07-22 18:04:46.112621] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd60d0) on tqpair=0x1d6d9e0 00:27:41.882 ===================================================== 00:27:41.882 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:41.882 ===================================================== 00:27:41.882 Controller Capabilities/Features 00:27:41.882 ================================ 00:27:41.882 Vendor ID: 8086 00:27:41.882 Subsystem Vendor ID: 8086 00:27:41.882 Serial Number: SPDK00000000000001 00:27:41.882 Model Number: SPDK bdev Controller 00:27:41.883 Firmware Version: 24.01.1 00:27:41.883 Recommended Arb Burst: 6 00:27:41.883 IEEE OUI Identifier: e4 d2 5c 00:27:41.883 Multi-path I/O 00:27:41.883 May have multiple subsystem ports: Yes 00:27:41.883 May have multiple controllers: Yes 00:27:41.883 Associated with SR-IOV VF: No 00:27:41.883 Max Data Transfer Size: 131072 00:27:41.883 Max Number of Namespaces: 32 00:27:41.883 Max Number of I/O Queues: 127 00:27:41.883 NVMe Specification Version (VS): 1.3 00:27:41.883 NVMe Specification Version (Identify): 1.3 00:27:41.883 Maximum Queue Entries: 128 00:27:41.883 Contiguous Queues Required: Yes 00:27:41.883 Arbitration Mechanisms Supported 00:27:41.883 Weighted Round Robin: Not Supported 00:27:41.883 Vendor Specific: Not Supported 00:27:41.883 Reset Timeout: 15000 ms 00:27:41.883 Doorbell Stride: 4 bytes 00:27:41.883 NVM Subsystem Reset: Not Supported 00:27:41.883 Command Sets Supported 00:27:41.883 NVM Command Set: Supported 00:27:41.883 Boot Partition: Not Supported 00:27:41.883 Memory Page Size Minimum: 4096 bytes 00:27:41.883 Memory Page Size Maximum: 4096 bytes 00:27:41.883 Persistent Memory Region: Not Supported 00:27:41.883 Optional Asynchronous Events Supported 00:27:41.883 Namespace Attribute Notices: Supported 00:27:41.883 Firmware Activation Notices: Not Supported 00:27:41.883 ANA Change Notices: Not Supported 00:27:41.883 PLE Aggregate Log Change Notices: Not Supported 00:27:41.883 LBA Status Info Alert Notices: Not Supported 00:27:41.883 EGE Aggregate Log Change Notices: Not Supported 00:27:41.883 Normal NVM Subsystem Shutdown event: Not Supported 00:27:41.883 Zone Descriptor Change Notices: Not Supported 00:27:41.883 Discovery Log Change Notices: Not Supported 00:27:41.883 Controller Attributes 00:27:41.883 128-bit Host Identifier: Supported 00:27:41.883 Non-Operational Permissive Mode: Not Supported 00:27:41.883 NVM Sets: Not Supported 00:27:41.883 Read Recovery Levels: Not Supported 00:27:41.883 Endurance Groups: Not Supported 00:27:41.883 Predictable Latency Mode: Not Supported 00:27:41.883 Traffic Based Keep ALive: Not Supported 00:27:41.883 Namespace Granularity: Not Supported 00:27:41.883 SQ Associations: Not Supported 00:27:41.883 UUID List: Not Supported 00:27:41.883 Multi-Domain Subsystem: Not Supported 00:27:41.883 Fixed Capacity Management: Not Supported 00:27:41.883 Variable Capacity Management: Not Supported 00:27:41.883 Delete Endurance Group: Not Supported 00:27:41.883 Delete NVM Set: Not Supported 00:27:41.883 Extended LBA Formats Supported: 
Not Supported 00:27:41.883 Flexible Data Placement Supported: Not Supported 00:27:41.883 00:27:41.883 Controller Memory Buffer Support 00:27:41.883 ================================ 00:27:41.883 Supported: No 00:27:41.883 00:27:41.883 Persistent Memory Region Support 00:27:41.883 ================================ 00:27:41.883 Supported: No 00:27:41.883 00:27:41.883 Admin Command Set Attributes 00:27:41.883 ============================ 00:27:41.883 Security Send/Receive: Not Supported 00:27:41.883 Format NVM: Not Supported 00:27:41.883 Firmware Activate/Download: Not Supported 00:27:41.883 Namespace Management: Not Supported 00:27:41.883 Device Self-Test: Not Supported 00:27:41.883 Directives: Not Supported 00:27:41.883 NVMe-MI: Not Supported 00:27:41.883 Virtualization Management: Not Supported 00:27:41.883 Doorbell Buffer Config: Not Supported 00:27:41.883 Get LBA Status Capability: Not Supported 00:27:41.883 Command & Feature Lockdown Capability: Not Supported 00:27:41.883 Abort Command Limit: 4 00:27:41.883 Async Event Request Limit: 4 00:27:41.883 Number of Firmware Slots: N/A 00:27:41.883 Firmware Slot 1 Read-Only: N/A 00:27:41.883 Firmware Activation Without Reset: N/A 00:27:41.883 Multiple Update Detection Support: N/A 00:27:41.883 Firmware Update Granularity: No Information Provided 00:27:41.883 Per-Namespace SMART Log: No 00:27:41.883 Asymmetric Namespace Access Log Page: Not Supported 00:27:41.883 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:41.883 Command Effects Log Page: Supported 00:27:41.883 Get Log Page Extended Data: Supported 00:27:41.883 Telemetry Log Pages: Not Supported 00:27:41.883 Persistent Event Log Pages: Not Supported 00:27:41.883 Supported Log Pages Log Page: May Support 00:27:41.883 Commands Supported & Effects Log Page: Not Supported 00:27:41.883 Feature Identifiers & Effects Log Page:May Support 00:27:41.883 NVMe-MI Commands & Effects Log Page: May Support 00:27:41.883 Data Area 4 for Telemetry Log: Not Supported 00:27:41.883 Error Log Page Entries Supported: 128 00:27:41.883 Keep Alive: Supported 00:27:41.883 Keep Alive Granularity: 10000 ms 00:27:41.883 00:27:41.883 NVM Command Set Attributes 00:27:41.883 ========================== 00:27:41.883 Submission Queue Entry Size 00:27:41.883 Max: 64 00:27:41.883 Min: 64 00:27:41.883 Completion Queue Entry Size 00:27:41.883 Max: 16 00:27:41.883 Min: 16 00:27:41.883 Number of Namespaces: 32 00:27:41.883 Compare Command: Supported 00:27:41.883 Write Uncorrectable Command: Not Supported 00:27:41.883 Dataset Management Command: Supported 00:27:41.883 Write Zeroes Command: Supported 00:27:41.883 Set Features Save Field: Not Supported 00:27:41.883 Reservations: Supported 00:27:41.883 Timestamp: Not Supported 00:27:41.883 Copy: Supported 00:27:41.883 Volatile Write Cache: Present 00:27:41.883 Atomic Write Unit (Normal): 1 00:27:41.883 Atomic Write Unit (PFail): 1 00:27:41.883 Atomic Compare & Write Unit: 1 00:27:41.883 Fused Compare & Write: Supported 00:27:41.883 Scatter-Gather List 00:27:41.883 SGL Command Set: Supported 00:27:41.883 SGL Keyed: Supported 00:27:41.883 SGL Bit Bucket Descriptor: Not Supported 00:27:41.883 SGL Metadata Pointer: Not Supported 00:27:41.883 Oversized SGL: Not Supported 00:27:41.883 SGL Metadata Address: Not Supported 00:27:41.883 SGL Offset: Supported 00:27:41.883 Transport SGL Data Block: Not Supported 00:27:41.883 Replay Protected Memory Block: Not Supported 00:27:41.883 00:27:41.883 Firmware Slot Information 00:27:41.883 ========================= 00:27:41.883 Active slot: 1 00:27:41.883 
Slot 1 Firmware Revision: 24.01.1 00:27:41.883 00:27:41.883 00:27:41.883 Commands Supported and Effects 00:27:41.883 ============================== 00:27:41.883 Admin Commands 00:27:41.883 -------------- 00:27:41.883 Get Log Page (02h): Supported 00:27:41.883 Identify (06h): Supported 00:27:41.883 Abort (08h): Supported 00:27:41.883 Set Features (09h): Supported 00:27:41.883 Get Features (0Ah): Supported 00:27:41.883 Asynchronous Event Request (0Ch): Supported 00:27:41.883 Keep Alive (18h): Supported 00:27:41.883 I/O Commands 00:27:41.883 ------------ 00:27:41.883 Flush (00h): Supported LBA-Change 00:27:41.883 Write (01h): Supported LBA-Change 00:27:41.883 Read (02h): Supported 00:27:41.883 Compare (05h): Supported 00:27:41.883 Write Zeroes (08h): Supported LBA-Change 00:27:41.883 Dataset Management (09h): Supported LBA-Change 00:27:41.883 Copy (19h): Supported LBA-Change 00:27:41.883 Unknown (79h): Supported LBA-Change 00:27:41.883 Unknown (7Ah): Supported 00:27:41.883 00:27:41.883 Error Log 00:27:41.883 ========= 00:27:41.883 00:27:41.883 Arbitration 00:27:41.883 =========== 00:27:41.883 Arbitration Burst: 1 00:27:41.883 00:27:41.883 Power Management 00:27:41.883 ================ 00:27:41.883 Number of Power States: 1 00:27:41.883 Current Power State: Power State #0 00:27:41.883 Power State #0: 00:27:41.883 Max Power: 0.00 W 00:27:41.883 Non-Operational State: Operational 00:27:41.883 Entry Latency: Not Reported 00:27:41.883 Exit Latency: Not Reported 00:27:41.883 Relative Read Throughput: 0 00:27:41.883 Relative Read Latency: 0 00:27:41.883 Relative Write Throughput: 0 00:27:41.883 Relative Write Latency: 0 00:27:41.883 Idle Power: Not Reported 00:27:41.883 Active Power: Not Reported 00:27:41.883 Non-Operational Permissive Mode: Not Supported 00:27:41.883 00:27:41.883 Health Information 00:27:41.883 ================== 00:27:41.883 Critical Warnings: 00:27:41.883 Available Spare Space: OK 00:27:41.883 Temperature: OK 00:27:41.883 Device Reliability: OK 00:27:41.883 Read Only: No 00:27:41.883 Volatile Memory Backup: OK 00:27:41.883 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:41.884 Temperature Threshold: [2024-07-22 18:04:46.112720] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.884 [2024-07-22 18:04:46.112725] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.884 [2024-07-22 18:04:46.112728] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d6d9e0) 00:27:41.884 [2024-07-22 18:04:46.112735] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.884 [2024-07-22 18:04:46.112746] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd60d0, cid 7, qid 0 00:27:41.884 [2024-07-22 18:04:46.113016] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.884 [2024-07-22 18:04:46.113022] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.884 [2024-07-22 18:04:46.113025] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.884 [2024-07-22 18:04:46.113029] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd60d0) on tqpair=0x1d6d9e0 00:27:41.884 [2024-07-22 18:04:46.113055] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:27:41.884 [2024-07-22 18:04:46.113065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.884 [2024-07-22 18:04:46.113071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.884 [2024-07-22 18:04:46.113077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.884 [2024-07-22 18:04:46.113083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.884 [2024-07-22 18:04:46.113090] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.884 [2024-07-22 18:04:46.113094] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.884 [2024-07-22 18:04:46.113097] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6d9e0) 00:27:41.884 [2024-07-22 18:04:46.113103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.884 [2024-07-22 18:04:46.113114] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd5b50, cid 3, qid 0 00:27:41.884 [2024-07-22 18:04:46.113279] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.884 [2024-07-22 18:04:46.113285] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.884 [2024-07-22 18:04:46.113288] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.884 [2024-07-22 18:04:46.113292] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd5b50) on tqpair=0x1d6d9e0 00:27:41.884 [2024-07-22 18:04:46.113299] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.884 [2024-07-22 18:04:46.113303] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.884 [2024-07-22 18:04:46.113306] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6d9e0) 00:27:41.884 [2024-07-22 18:04:46.113312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.884 [2024-07-22 18:04:46.113324] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd5b50, cid 3, qid 0 00:27:41.884 [2024-07-22 18:04:46.113517] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.884 [2024-07-22 18:04:46.113524] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.884 [2024-07-22 18:04:46.113527] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.884 [2024-07-22 18:04:46.113531] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd5b50) on tqpair=0x1d6d9e0 00:27:41.884 [2024-07-22 18:04:46.113536] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:27:41.884 [2024-07-22 18:04:46.113543] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:27:41.884 [2024-07-22 18:04:46.113554] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.884 [2024-07-22 18:04:46.113558] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.884 [2024-07-22 18:04:46.113561] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6d9e0) 00:27:41.884 [2024-07-22 18:04:46.113567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.884 [2024-07-22 18:04:46.113576] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd5b50, cid 3, qid 0 00:27:41.884 [2024-07-22 18:04:46.113770] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.884 [2024-07-22 18:04:46.113776] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.884 [2024-07-22 18:04:46.113779] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.884 [2024-07-22 18:04:46.113783] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd5b50) on tqpair=0x1d6d9e0 00:27:41.884 [2024-07-22 18:04:46.113792] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.884 [2024-07-22 18:04:46.113796] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.884 [2024-07-22 18:04:46.113800] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6d9e0) 00:27:41.884 [2024-07-22 18:04:46.113806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.884 [2024-07-22 18:04:46.113815] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd5b50, cid 3, qid 0 00:27:41.884 [2024-07-22 18:04:46.114023] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.884 [2024-07-22 18:04:46.114029] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.884 [2024-07-22 18:04:46.114032] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.884 [2024-07-22 18:04:46.114035] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd5b50) on tqpair=0x1d6d9e0 00:27:41.884 [2024-07-22 18:04:46.114044] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.884 [2024-07-22 18:04:46.114048] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.884 [2024-07-22 18:04:46.114051] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6d9e0) 00:27:41.884 [2024-07-22 18:04:46.114057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.884 [2024-07-22 18:04:46.114066] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd5b50, cid 3, qid 0 00:27:41.884 [2024-07-22 18:04:46.114254] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.884 [2024-07-22 18:04:46.114260] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.884 [2024-07-22 18:04:46.114263] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.884 [2024-07-22 18:04:46.114266] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd5b50) on tqpair=0x1d6d9e0 00:27:41.884 [2024-07-22 18:04:46.114276] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.884 [2024-07-22 18:04:46.114280] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.884 [2024-07-22 18:04:46.114283] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d6d9e0) 00:27:41.884 [2024-07-22 18:04:46.114289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.884 [2024-07-22 18:04:46.114300] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd5b50, cid 
3, qid 0 00:27:41.884 [2024-07-22 18:04:46.118357] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.884 [2024-07-22 18:04:46.118364] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.884 [2024-07-22 18:04:46.118368] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.884 [2024-07-22 18:04:46.118371] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dd5b50) on tqpair=0x1d6d9e0 00:27:41.884 [2024-07-22 18:04:46.118379] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:27:41.884 0 Kelvin (-273 Celsius) 00:27:41.884 Available Spare: 0% 00:27:41.884 Available Spare Threshold: 0% 00:27:41.884 Life Percentage Used: 0% 00:27:41.884 Data Units Read: 0 00:27:41.884 Data Units Written: 0 00:27:41.884 Host Read Commands: 0 00:27:41.884 Host Write Commands: 0 00:27:41.884 Controller Busy Time: 0 minutes 00:27:41.884 Power Cycles: 0 00:27:41.884 Power On Hours: 0 hours 00:27:41.884 Unsafe Shutdowns: 0 00:27:41.884 Unrecoverable Media Errors: 0 00:27:41.884 Lifetime Error Log Entries: 0 00:27:41.884 Warning Temperature Time: 0 minutes 00:27:41.884 Critical Temperature Time: 0 minutes 00:27:41.884 00:27:41.884 Number of Queues 00:27:41.884 ================ 00:27:41.884 Number of I/O Submission Queues: 127 00:27:41.884 Number of I/O Completion Queues: 127 00:27:41.884 00:27:41.884 Active Namespaces 00:27:41.884 ================= 00:27:41.884 Namespace ID:1 00:27:41.884 Error Recovery Timeout: Unlimited 00:27:41.884 Command Set Identifier: NVM (00h) 00:27:41.884 Deallocate: Supported 00:27:41.884 Deallocated/Unwritten Error: Not Supported 00:27:41.884 Deallocated Read Value: Unknown 00:27:41.884 Deallocate in Write Zeroes: Not Supported 00:27:41.884 Deallocated Guard Field: 0xFFFF 00:27:41.884 Flush: Supported 00:27:41.884 Reservation: Supported 00:27:41.884 Namespace Sharing Capabilities: Multiple Controllers 00:27:41.884 Size (in LBAs): 131072 (0GiB) 00:27:41.884 Capacity (in LBAs): 131072 (0GiB) 00:27:41.884 Utilization (in LBAs): 131072 (0GiB) 00:27:41.884 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:41.884 EUI64: ABCDEF0123456789 00:27:41.884 UUID: bbb3c770-362a-467d-bfd2-b8cfc1f22a57 00:27:41.884 Thin Provisioning: Not Supported 00:27:41.884 Per-NS Atomic Units: Yes 00:27:41.884 Atomic Boundary Size (Normal): 0 00:27:41.884 Atomic Boundary Size (PFail): 0 00:27:41.884 Atomic Boundary Offset: 0 00:27:41.884 Maximum Single Source Range Length: 65535 00:27:41.884 Maximum Copy Length: 65535 00:27:41.884 Maximum Source Range Count: 1 00:27:41.884 NGUID/EUI64 Never Reused: No 00:27:41.884 Namespace Write Protected: No 00:27:41.884 Number of LBA Formats: 1 00:27:41.884 Current LBA Format: LBA Format #00 00:27:41.884 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:41.884 00:27:41.884 18:04:46 -- host/identify.sh@51 -- # sync 00:27:41.884 18:04:46 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:41.885 18:04:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:41.885 18:04:46 -- common/autotest_common.sh@10 -- # set +x 00:27:42.145 18:04:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:42.145 18:04:46 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:42.145 18:04:46 -- host/identify.sh@56 -- # nvmftestfini 00:27:42.145 18:04:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:42.145 18:04:46 -- nvmf/common.sh@116 -- # sync 00:27:42.145 18:04:46 -- nvmf/common.sh@118 
-- # '[' tcp == tcp ']' 00:27:42.145 18:04:46 -- nvmf/common.sh@119 -- # set +e 00:27:42.145 18:04:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:42.145 18:04:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:42.145 rmmod nvme_tcp 00:27:42.145 rmmod nvme_fabrics 00:27:42.145 rmmod nvme_keyring 00:27:42.145 18:04:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:42.145 18:04:46 -- nvmf/common.sh@123 -- # set -e 00:27:42.145 18:04:46 -- nvmf/common.sh@124 -- # return 0 00:27:42.145 18:04:46 -- nvmf/common.sh@477 -- # '[' -n 1812114 ']' 00:27:42.145 18:04:46 -- nvmf/common.sh@478 -- # killprocess 1812114 00:27:42.145 18:04:46 -- common/autotest_common.sh@926 -- # '[' -z 1812114 ']' 00:27:42.145 18:04:46 -- common/autotest_common.sh@930 -- # kill -0 1812114 00:27:42.145 18:04:46 -- common/autotest_common.sh@931 -- # uname 00:27:42.145 18:04:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:42.145 18:04:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1812114 00:27:42.146 18:04:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:42.146 18:04:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:42.146 18:04:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1812114' 00:27:42.146 killing process with pid 1812114 00:27:42.146 18:04:46 -- common/autotest_common.sh@945 -- # kill 1812114 00:27:42.146 [2024-07-22 18:04:46.269671] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:27:42.146 18:04:46 -- common/autotest_common.sh@950 -- # wait 1812114 00:27:42.146 18:04:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:42.146 18:04:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:42.146 18:04:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:42.146 18:04:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:42.146 18:04:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:42.146 18:04:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.146 18:04:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:42.146 18:04:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.744 18:04:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:44.744 00:27:44.744 real 0m12.227s 00:27:44.744 user 0m8.486s 00:27:44.744 sys 0m6.559s 00:27:44.744 18:04:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:44.744 18:04:48 -- common/autotest_common.sh@10 -- # set +x 00:27:44.744 ************************************ 00:27:44.744 END TEST nvmf_identify 00:27:44.744 ************************************ 00:27:44.744 18:04:48 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:44.744 18:04:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:44.744 18:04:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:44.744 18:04:48 -- common/autotest_common.sh@10 -- # set +x 00:27:44.744 ************************************ 00:27:44.744 START TEST nvmf_perf 00:27:44.744 ************************************ 00:27:44.744 18:04:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:44.744 * Looking for test storage... 
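[Editor's note] The lines above close out the nvmf_identify test: the controller dump was gathered over the TCP transport, the subsystem was torn down with rpc_cmd nvmf_delete_subsystem, and nvmftestfini unloaded the kernel NVMe/TCP modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring messages) before nvmf_perf starts. A condensed, hedged sketch of those steps; the example binary path and exact flags are assumptions and differ between SPDK versions:
  # hedged sketch only, mirroring what host/identify.sh and nvmftestfini do above
  ./build/examples/identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # as invoked via rpc_cmd above
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics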
00:27:44.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:44.744 18:04:48 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:44.744 18:04:48 -- nvmf/common.sh@7 -- # uname -s 00:27:44.744 18:04:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:44.744 18:04:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:44.744 18:04:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:44.744 18:04:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:44.744 18:04:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:44.744 18:04:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:44.744 18:04:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:44.744 18:04:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:44.744 18:04:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:44.744 18:04:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:44.744 18:04:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:27:44.744 18:04:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:27:44.744 18:04:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:44.744 18:04:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:44.744 18:04:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:44.744 18:04:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:44.744 18:04:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:44.744 18:04:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:44.744 18:04:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:44.744 18:04:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.744 18:04:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.745 18:04:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.745 18:04:48 -- paths/export.sh@5 -- # export PATH 00:27:44.745 18:04:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.745 18:04:48 -- nvmf/common.sh@46 -- # : 0 00:27:44.745 18:04:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:44.745 18:04:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:44.745 18:04:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:44.745 18:04:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:44.745 18:04:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:44.745 18:04:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:44.745 18:04:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:44.745 18:04:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:44.745 18:04:48 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:44.745 18:04:48 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:44.745 18:04:48 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:44.745 18:04:48 -- host/perf.sh@17 -- # nvmftestinit 00:27:44.745 18:04:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:44.745 18:04:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:44.745 18:04:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:44.745 18:04:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:44.745 18:04:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:44.745 18:04:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.745 18:04:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:44.745 18:04:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.745 18:04:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:44.745 18:04:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:44.745 18:04:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:44.745 18:04:48 -- common/autotest_common.sh@10 -- # set +x 00:27:52.885 18:04:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:52.885 18:04:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:52.885 18:04:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:52.885 18:04:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:52.885 18:04:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:52.885 18:04:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:52.885 18:04:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:52.885 18:04:56 -- nvmf/common.sh@294 -- # net_devs=() 
00:27:52.885 18:04:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:52.885 18:04:56 -- nvmf/common.sh@295 -- # e810=() 00:27:52.885 18:04:56 -- nvmf/common.sh@295 -- # local -ga e810 00:27:52.885 18:04:56 -- nvmf/common.sh@296 -- # x722=() 00:27:52.885 18:04:56 -- nvmf/common.sh@296 -- # local -ga x722 00:27:52.885 18:04:56 -- nvmf/common.sh@297 -- # mlx=() 00:27:52.885 18:04:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:52.885 18:04:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:52.885 18:04:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:52.885 18:04:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:52.885 18:04:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:52.885 18:04:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:52.885 18:04:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:52.885 18:04:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:52.885 18:04:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:52.885 18:04:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:52.885 18:04:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:52.885 18:04:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:52.885 18:04:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:52.885 18:04:56 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:52.885 18:04:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:52.885 18:04:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:52.885 18:04:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:52.885 18:04:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:52.885 18:04:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:52.885 18:04:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:52.885 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:52.885 18:04:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:52.885 18:04:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:52.885 18:04:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.885 18:04:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.885 18:04:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:52.885 18:04:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:52.885 18:04:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:52.885 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:52.885 18:04:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:52.885 18:04:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:52.885 18:04:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.885 18:04:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.885 18:04:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:52.885 18:04:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:52.885 18:04:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:52.885 18:04:56 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:52.885 18:04:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:52.885 18:04:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.885 18:04:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:52.885 18:04:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:27:52.885 18:04:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:52.885 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:52.885 18:04:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.885 18:04:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:52.885 18:04:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.885 18:04:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:52.885 18:04:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.885 18:04:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:52.885 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:52.885 18:04:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.885 18:04:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:52.885 18:04:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:52.885 18:04:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:52.886 18:04:56 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:52.886 18:04:56 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:52.886 18:04:56 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:52.886 18:04:56 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:52.886 18:04:56 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:52.886 18:04:56 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:52.886 18:04:56 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:52.886 18:04:56 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:52.886 18:04:56 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:52.886 18:04:56 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:52.886 18:04:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:52.886 18:04:56 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:52.886 18:04:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:52.886 18:04:56 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:52.886 18:04:56 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:52.886 18:04:56 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:52.886 18:04:56 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:52.886 18:04:56 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:52.886 18:04:56 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:52.886 18:04:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:52.886 18:04:56 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:52.886 18:04:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:52.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:52.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.489 ms 00:27:52.886 00:27:52.886 --- 10.0.0.2 ping statistics --- 00:27:52.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.886 rtt min/avg/max/mdev = 0.489/0.489/0.489/0.000 ms 00:27:52.886 18:04:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:52.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:52.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:27:52.886 00:27:52.886 --- 10.0.0.1 ping statistics --- 00:27:52.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.886 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:27:52.886 18:04:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:52.886 18:04:56 -- nvmf/common.sh@410 -- # return 0 00:27:52.886 18:04:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:52.886 18:04:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:52.886 18:04:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:52.886 18:04:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:52.886 18:04:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:52.886 18:04:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:52.886 18:04:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:52.886 18:04:56 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:52.886 18:04:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:52.886 18:04:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:52.886 18:04:56 -- common/autotest_common.sh@10 -- # set +x 00:27:52.886 18:04:56 -- nvmf/common.sh@469 -- # nvmfpid=1816715 00:27:52.886 18:04:56 -- nvmf/common.sh@470 -- # waitforlisten 1816715 00:27:52.886 18:04:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:52.886 18:04:56 -- common/autotest_common.sh@819 -- # '[' -z 1816715 ']' 00:27:52.886 18:04:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.886 18:04:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:52.886 18:04:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:52.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:52.886 18:04:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:52.886 18:04:56 -- common/autotest_common.sh@10 -- # set +x 00:27:52.886 [2024-07-22 18:04:56.694327] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:52.886 [2024-07-22 18:04:56.694405] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:52.886 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.886 [2024-07-22 18:04:56.789237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:52.886 [2024-07-22 18:04:56.879296] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:52.886 [2024-07-22 18:04:56.879459] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:52.886 [2024-07-22 18:04:56.879475] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:52.886 [2024-07-22 18:04:56.879482] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:52.886 [2024-07-22 18:04:56.879620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.886 [2024-07-22 18:04:56.879745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:52.886 [2024-07-22 18:04:56.879874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:52.886 [2024-07-22 18:04:56.879877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.456 18:04:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:53.456 18:04:57 -- common/autotest_common.sh@852 -- # return 0 00:27:53.456 18:04:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:53.456 18:04:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:53.456 18:04:57 -- common/autotest_common.sh@10 -- # set +x 00:27:53.456 18:04:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:53.456 18:04:57 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:53.456 18:04:57 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:56.751 18:05:00 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:56.751 18:05:00 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:56.751 18:05:00 -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:27:56.751 18:05:00 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:57.012 18:05:01 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:27:57.012 18:05:01 -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:27:57.012 18:05:01 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:57.012 18:05:01 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:27:57.012 18:05:01 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:57.012 [2024-07-22 18:05:01.237518] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:57.012 18:05:01 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:57.272 18:05:01 -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:57.272 18:05:01 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:57.531 18:05:01 -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:57.531 18:05:01 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:57.792 18:05:01 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:57.792 [2024-07-22 18:05:02.025052] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:57.792 18:05:02 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:58.052 18:05:02 -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:27:58.052 18:05:02 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:27:58.052 18:05:02 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
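The configuration steps traced above amount to the following target bring-up sequence. This is a consolidated sketch of the rpc.py calls already recorded in the log, not additional captured output; the Nvme0n1 bdev comes from gen_nvme.sh fed into load_subsystem_config, and 0000:65:00.0 is the NVMe device local to this test node.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # register the local NVMe controller with the running nvmf_tgt as bdev Nvme0n1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh | $rpc load_subsystem_config
    # create a 64 MiB malloc bdev with 512-byte blocks (becomes Malloc0)
    $rpc bdev_malloc_create 64 512
    # initialize the TCP transport and export both bdevs through one subsystem
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The perf runs that follow first exercise the local device directly via -r 'trtype:PCIe traddr:0000:65:00.0', then target this subsystem over NVMe/TCP with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'.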
00:27:58.052 18:05:02 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:27:59.434 Initializing NVMe Controllers 00:27:59.434 Attached to NVMe Controller at 0000:65:00.0 [8086:0a54] 00:27:59.434 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:27:59.434 Initialization complete. Launching workers. 00:27:59.434 ======================================================== 00:27:59.434 Latency(us) 00:27:59.434 Device Information : IOPS MiB/s Average min max 00:27:59.434 PCIE (0000:65:00.0) NSID 1 from core 0: 86839.53 339.22 367.94 44.16 6217.72 00:27:59.434 ======================================================== 00:27:59.434 Total : 86839.53 339.22 367.94 44.16 6217.72 00:27:59.434 00:27:59.434 18:05:03 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:59.434 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.817 Initializing NVMe Controllers 00:28:00.818 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:00.818 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:00.818 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:00.818 Initialization complete. Launching workers. 00:28:00.818 ======================================================== 00:28:00.818 Latency(us) 00:28:00.818 Device Information : IOPS MiB/s Average min max 00:28:00.818 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 69.00 0.27 14675.23 229.81 45626.34 00:28:00.818 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 62.00 0.24 16332.17 7964.39 47904.81 00:28:00.818 ======================================================== 00:28:00.818 Total : 131.00 0.51 15459.43 229.81 47904.81 00:28:00.818 00:28:00.818 18:05:04 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:00.818 EAL: No free 2048 kB hugepages reported on node 1 00:28:01.760 Initializing NVMe Controllers 00:28:01.760 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:01.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:01.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:01.760 Initialization complete. Launching workers. 
00:28:01.760 ======================================================== 00:28:01.760 Latency(us) 00:28:01.760 Device Information : IOPS MiB/s Average min max 00:28:01.760 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9350.00 36.52 3423.49 431.67 6649.03 00:28:01.760 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3877.00 15.14 8298.92 7210.30 15947.71 00:28:01.760 ======================================================== 00:28:01.760 Total : 13227.00 51.67 4852.54 431.67 15947.71 00:28:01.760 00:28:01.760 18:05:05 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:01.760 18:05:05 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:01.760 18:05:05 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:01.760 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.305 Initializing NVMe Controllers 00:28:04.305 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:04.305 Controller IO queue size 128, less than required. 00:28:04.305 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:04.305 Controller IO queue size 128, less than required. 00:28:04.305 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:04.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:04.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:04.305 Initialization complete. Launching workers. 00:28:04.305 ======================================================== 00:28:04.305 Latency(us) 00:28:04.305 Device Information : IOPS MiB/s Average min max 00:28:04.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1597.79 399.45 81135.97 56508.09 119192.08 00:28:04.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 597.55 149.39 221304.96 97231.76 337503.98 00:28:04.305 ======================================================== 00:28:04.305 Total : 2195.34 548.83 119288.48 56508.09 337503.98 00:28:04.305 00:28:04.305 18:05:08 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:04.305 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.565 No valid NVMe controllers or AIO or URING devices found 00:28:04.565 Initializing NVMe Controllers 00:28:04.565 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:04.565 Controller IO queue size 128, less than required. 00:28:04.565 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:04.565 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:04.565 Controller IO queue size 128, less than required. 00:28:04.565 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:04.565 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:28:04.565 WARNING: Some requested NVMe devices were skipped 00:28:04.565 18:05:08 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:04.565 EAL: No free 2048 kB hugepages reported on node 1 00:28:07.109 Initializing NVMe Controllers 00:28:07.109 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:07.109 Controller IO queue size 128, less than required. 00:28:07.109 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:07.109 Controller IO queue size 128, less than required. 00:28:07.109 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:07.109 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:07.109 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:07.109 Initialization complete. Launching workers. 00:28:07.109 00:28:07.109 ==================== 00:28:07.109 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:07.109 TCP transport: 00:28:07.109 polls: 26596 00:28:07.109 idle_polls: 14889 00:28:07.109 sock_completions: 11707 00:28:07.109 nvme_completions: 6627 00:28:07.109 submitted_requests: 10131 00:28:07.109 queued_requests: 1 00:28:07.109 00:28:07.109 ==================== 00:28:07.109 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:07.109 TCP transport: 00:28:07.109 polls: 23682 00:28:07.109 idle_polls: 10954 00:28:07.110 sock_completions: 12728 00:28:07.110 nvme_completions: 6225 00:28:07.110 submitted_requests: 9497 00:28:07.110 queued_requests: 1 00:28:07.110 ======================================================== 00:28:07.110 Latency(us) 00:28:07.110 Device Information : IOPS MiB/s Average min max 00:28:07.110 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1717.44 429.36 76466.88 39690.41 137010.57 00:28:07.110 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1616.62 404.16 80190.28 33111.00 116482.95 00:28:07.110 ======================================================== 00:28:07.110 Total : 3334.07 833.52 78272.29 33111.00 137010.57 00:28:07.110 00:28:07.110 18:05:11 -- host/perf.sh@66 -- # sync 00:28:07.110 18:05:11 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:07.370 18:05:11 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:07.370 18:05:11 -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:28:07.370 18:05:11 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:13.953 18:05:18 -- host/perf.sh@72 -- # ls_guid=10093f06-0119-40bf-abea-eb59ce9a6df0 00:28:13.953 18:05:18 -- host/perf.sh@73 -- # get_lvs_free_mb 10093f06-0119-40bf-abea-eb59ce9a6df0 00:28:13.953 18:05:18 -- common/autotest_common.sh@1343 -- # local lvs_uuid=10093f06-0119-40bf-abea-eb59ce9a6df0 00:28:13.953 18:05:18 -- common/autotest_common.sh@1344 -- # local lvs_info 00:28:13.953 18:05:18 -- common/autotest_common.sh@1345 -- # local fc 00:28:13.953 18:05:18 -- common/autotest_common.sh@1346 -- # local cs 00:28:13.953 18:05:18 -- common/autotest_common.sh@1347 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:14.213 18:05:18 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:28:14.213 { 00:28:14.213 "uuid": "10093f06-0119-40bf-abea-eb59ce9a6df0", 00:28:14.213 "name": "lvs_0", 00:28:14.213 "base_bdev": "Nvme0n1", 00:28:14.213 "total_data_clusters": 476466, 00:28:14.213 "free_clusters": 476466, 00:28:14.213 "block_size": 512, 00:28:14.213 "cluster_size": 4194304 00:28:14.213 } 00:28:14.213 ]' 00:28:14.213 18:05:18 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="10093f06-0119-40bf-abea-eb59ce9a6df0") .free_clusters' 00:28:14.213 18:05:18 -- common/autotest_common.sh@1348 -- # fc=476466 00:28:14.213 18:05:18 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="10093f06-0119-40bf-abea-eb59ce9a6df0") .cluster_size' 00:28:14.473 18:05:18 -- common/autotest_common.sh@1349 -- # cs=4194304 00:28:14.473 18:05:18 -- common/autotest_common.sh@1352 -- # free_mb=1905864 00:28:14.473 18:05:18 -- common/autotest_common.sh@1353 -- # echo 1905864 00:28:14.473 1905864 00:28:14.473 18:05:18 -- host/perf.sh@77 -- # '[' 1905864 -gt 20480 ']' 00:28:14.473 18:05:18 -- host/perf.sh@78 -- # free_mb=20480 00:28:14.473 18:05:18 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 10093f06-0119-40bf-abea-eb59ce9a6df0 lbd_0 20480 00:28:14.473 18:05:18 -- host/perf.sh@80 -- # lb_guid=ba18b8d4-dbba-4a15-905a-ab292e7e6e92 00:28:14.473 18:05:18 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore ba18b8d4-dbba-4a15-905a-ab292e7e6e92 lvs_n_0 00:28:15.855 18:05:19 -- host/perf.sh@83 -- # ls_nested_guid=e3afecf3-2eff-4e6b-84c2-f86726bc8cf1 00:28:15.855 18:05:19 -- host/perf.sh@84 -- # get_lvs_free_mb e3afecf3-2eff-4e6b-84c2-f86726bc8cf1 00:28:15.855 18:05:19 -- common/autotest_common.sh@1343 -- # local lvs_uuid=e3afecf3-2eff-4e6b-84c2-f86726bc8cf1 00:28:15.855 18:05:19 -- common/autotest_common.sh@1344 -- # local lvs_info 00:28:15.855 18:05:19 -- common/autotest_common.sh@1345 -- # local fc 00:28:15.855 18:05:19 -- common/autotest_common.sh@1346 -- # local cs 00:28:15.855 18:05:19 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:15.855 18:05:20 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:28:15.855 { 00:28:15.855 "uuid": "10093f06-0119-40bf-abea-eb59ce9a6df0", 00:28:15.855 "name": "lvs_0", 00:28:15.855 "base_bdev": "Nvme0n1", 00:28:15.855 "total_data_clusters": 476466, 00:28:15.855 "free_clusters": 471346, 00:28:15.855 "block_size": 512, 00:28:15.855 "cluster_size": 4194304 00:28:15.855 }, 00:28:15.855 { 00:28:15.855 "uuid": "e3afecf3-2eff-4e6b-84c2-f86726bc8cf1", 00:28:15.855 "name": "lvs_n_0", 00:28:15.855 "base_bdev": "ba18b8d4-dbba-4a15-905a-ab292e7e6e92", 00:28:15.855 "total_data_clusters": 5114, 00:28:15.855 "free_clusters": 5114, 00:28:15.855 "block_size": 512, 00:28:15.855 "cluster_size": 4194304 00:28:15.855 } 00:28:15.855 ]' 00:28:15.855 18:05:20 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="e3afecf3-2eff-4e6b-84c2-f86726bc8cf1") .free_clusters' 00:28:16.115 18:05:20 -- common/autotest_common.sh@1348 -- # fc=5114 00:28:16.115 18:05:20 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="e3afecf3-2eff-4e6b-84c2-f86726bc8cf1") .cluster_size' 00:28:16.115 18:05:20 -- common/autotest_common.sh@1349 -- # cs=4194304 00:28:16.115 18:05:20 -- common/autotest_common.sh@1352 
-- # free_mb=20456 00:28:16.115 18:05:20 -- common/autotest_common.sh@1353 -- # echo 20456 00:28:16.115 20456 00:28:16.115 18:05:20 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:16.115 18:05:20 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e3afecf3-2eff-4e6b-84c2-f86726bc8cf1 lbd_nest_0 20456 00:28:16.376 18:05:20 -- host/perf.sh@88 -- # lb_nested_guid=e31f6160-93e8-4875-a979-f8202f2af12c 00:28:16.376 18:05:20 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:16.376 18:05:20 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:16.376 18:05:20 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 e31f6160-93e8-4875-a979-f8202f2af12c 00:28:16.636 18:05:20 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:16.896 18:05:20 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:16.896 18:05:20 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:16.896 18:05:20 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:16.896 18:05:20 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:16.896 18:05:20 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:16.896 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.122 Initializing NVMe Controllers 00:28:29.122 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:29.122 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:29.122 Initialization complete. Launching workers. 00:28:29.122 ======================================================== 00:28:29.122 Latency(us) 00:28:29.122 Device Information : IOPS MiB/s Average min max 00:28:29.122 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 44.50 0.02 22526.48 256.04 46020.29 00:28:29.122 ======================================================== 00:28:29.122 Total : 44.50 0.02 22526.48 256.04 46020.29 00:28:29.122 00:28:29.122 18:05:31 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:29.122 18:05:31 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:29.122 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.220 Initializing NVMe Controllers 00:28:39.220 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:39.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:39.220 Initialization complete. Launching workers. 
00:28:39.220 ======================================================== 00:28:39.220 Latency(us) 00:28:39.220 Device Information : IOPS MiB/s Average min max 00:28:39.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 61.60 7.70 16258.45 3992.28 50881.55 00:28:39.220 ======================================================== 00:28:39.220 Total : 61.60 7.70 16258.45 3992.28 50881.55 00:28:39.220 00:28:39.220 18:05:41 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:39.220 18:05:41 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:39.220 18:05:41 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:39.220 EAL: No free 2048 kB hugepages reported on node 1 00:28:49.218 Initializing NVMe Controllers 00:28:49.218 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:49.218 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:49.218 Initialization complete. Launching workers. 00:28:49.218 ======================================================== 00:28:49.218 Latency(us) 00:28:49.218 Device Information : IOPS MiB/s Average min max 00:28:49.218 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8147.52 3.98 3927.37 382.80 9921.36 00:28:49.218 ======================================================== 00:28:49.218 Total : 8147.52 3.98 3927.37 382.80 9921.36 00:28:49.218 00:28:49.218 18:05:51 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:49.218 18:05:51 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:49.218 EAL: No free 2048 kB hugepages reported on node 1 00:28:59.211 Initializing NVMe Controllers 00:28:59.211 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:59.211 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:59.211 Initialization complete. Launching workers. 00:28:59.211 ======================================================== 00:28:59.211 Latency(us) 00:28:59.211 Device Information : IOPS MiB/s Average min max 00:28:59.211 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4114.80 514.35 7776.66 584.49 20178.04 00:28:59.211 ======================================================== 00:28:59.211 Total : 4114.80 514.35 7776.66 584.49 20178.04 00:28:59.211 00:28:59.211 18:06:02 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:59.211 18:06:02 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:59.211 18:06:02 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:59.211 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.210 Initializing NVMe Controllers 00:29:09.210 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:09.210 Controller IO queue size 128, less than required. 00:29:09.210 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:09.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:09.210 Initialization complete. Launching workers. 
00:29:09.210 ======================================================== 00:29:09.210 Latency(us) 00:29:09.210 Device Information : IOPS MiB/s Average min max 00:29:09.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14302.20 6.98 8955.03 1679.74 22800.72 00:29:09.210 ======================================================== 00:29:09.211 Total : 14302.20 6.98 8955.03 1679.74 22800.72 00:29:09.211 00:29:09.211 18:06:12 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:09.211 18:06:12 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:09.211 EAL: No free 2048 kB hugepages reported on node 1 00:29:19.210 Initializing NVMe Controllers 00:29:19.210 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:19.210 Controller IO queue size 128, less than required. 00:29:19.210 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:19.210 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:19.210 Initialization complete. Launching workers. 00:29:19.210 ======================================================== 00:29:19.210 Latency(us) 00:29:19.210 Device Information : IOPS MiB/s Average min max 00:29:19.210 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1185.89 148.24 108038.59 23790.23 258335.90 00:29:19.210 ======================================================== 00:29:19.210 Total : 1185.89 148.24 108038.59 23790.23 258335.90 00:29:19.210 00:29:19.210 18:06:23 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:19.210 18:06:23 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e31f6160-93e8-4875-a979-f8202f2af12c 00:29:20.151 18:06:24 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:20.151 18:06:24 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ba18b8d4-dbba-4a15-905a-ab292e7e6e92 00:29:20.411 18:06:24 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:20.671 18:06:24 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:20.671 18:06:24 -- host/perf.sh@114 -- # nvmftestfini 00:29:20.671 18:06:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:20.671 18:06:24 -- nvmf/common.sh@116 -- # sync 00:29:20.671 18:06:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:20.671 18:06:24 -- nvmf/common.sh@119 -- # set +e 00:29:20.671 18:06:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:20.671 18:06:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:20.671 rmmod nvme_tcp 00:29:20.671 rmmod nvme_fabrics 00:29:20.671 rmmod nvme_keyring 00:29:20.671 18:06:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:20.671 18:06:24 -- nvmf/common.sh@123 -- # set -e 00:29:20.671 18:06:24 -- nvmf/common.sh@124 -- # return 0 00:29:20.671 18:06:24 -- nvmf/common.sh@477 -- # '[' -n 1816715 ']' 00:29:20.671 18:06:24 -- nvmf/common.sh@478 -- # killprocess 1816715 00:29:20.671 18:06:24 -- common/autotest_common.sh@926 -- # '[' -z 1816715 ']' 00:29:20.671 18:06:24 -- common/autotest_common.sh@930 -- # kill 
-0 1816715 00:29:20.671 18:06:24 -- common/autotest_common.sh@931 -- # uname 00:29:20.671 18:06:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:20.671 18:06:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1816715 00:29:20.671 18:06:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:20.671 18:06:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:20.671 18:06:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1816715' 00:29:20.671 killing process with pid 1816715 00:29:20.671 18:06:24 -- common/autotest_common.sh@945 -- # kill 1816715 00:29:20.671 18:06:24 -- common/autotest_common.sh@950 -- # wait 1816715 00:29:23.212 18:06:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:23.212 18:06:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:23.212 18:06:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:23.212 18:06:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:23.212 18:06:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:23.212 18:06:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.212 18:06:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:23.212 18:06:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.123 18:06:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:25.123 00:29:25.123 real 1m40.814s 00:29:25.123 user 5m58.767s 00:29:25.123 sys 0m14.802s 00:29:25.123 18:06:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:25.123 18:06:29 -- common/autotest_common.sh@10 -- # set +x 00:29:25.123 ************************************ 00:29:25.123 END TEST nvmf_perf 00:29:25.123 ************************************ 00:29:25.123 18:06:29 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:25.123 18:06:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:25.123 18:06:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:25.123 18:06:29 -- common/autotest_common.sh@10 -- # set +x 00:29:25.123 ************************************ 00:29:25.123 START TEST nvmf_fio_host 00:29:25.123 ************************************ 00:29:25.123 18:06:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:25.383 * Looking for test storage... 
00:29:25.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:25.383 18:06:29 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:25.383 18:06:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:25.383 18:06:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:25.383 18:06:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:25.383 18:06:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.383 18:06:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.384 18:06:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.384 18:06:29 -- paths/export.sh@5 -- # export PATH 00:29:25.384 18:06:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.384 18:06:29 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:25.384 18:06:29 -- nvmf/common.sh@7 -- # uname -s 00:29:25.384 18:06:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:25.384 18:06:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:25.384 18:06:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:25.384 18:06:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:25.384 18:06:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:25.384 18:06:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:25.384 18:06:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:25.384 18:06:29 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:25.384 18:06:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:25.384 18:06:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:25.384 18:06:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:29:25.384 18:06:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:29:25.384 18:06:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:25.384 18:06:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:25.384 18:06:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:25.384 18:06:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:25.384 18:06:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:25.384 18:06:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:25.384 18:06:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:25.384 18:06:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.384 18:06:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.384 18:06:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.384 18:06:29 -- paths/export.sh@5 -- # export PATH 00:29:25.384 18:06:29 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.384 18:06:29 -- nvmf/common.sh@46 -- # : 0 00:29:25.384 18:06:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:25.384 18:06:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:25.384 18:06:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:25.384 18:06:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:25.384 18:06:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:25.384 18:06:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:25.384 18:06:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:25.384 18:06:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:25.384 18:06:29 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:25.384 18:06:29 -- host/fio.sh@14 -- # nvmftestinit 00:29:25.384 18:06:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:25.384 18:06:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:25.384 18:06:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:25.384 18:06:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:25.384 18:06:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:25.384 18:06:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.384 18:06:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:25.384 18:06:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.384 18:06:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:25.384 18:06:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:25.384 18:06:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:25.384 18:06:29 -- common/autotest_common.sh@10 -- # set +x 00:29:33.530 18:06:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:33.530 18:06:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:33.530 18:06:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:33.530 18:06:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:33.530 18:06:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:33.530 18:06:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:33.530 18:06:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:33.530 18:06:37 -- nvmf/common.sh@294 -- # net_devs=() 00:29:33.530 18:06:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:33.530 18:06:37 -- nvmf/common.sh@295 -- # e810=() 00:29:33.530 18:06:37 -- nvmf/common.sh@295 -- # local -ga e810 00:29:33.530 18:06:37 -- nvmf/common.sh@296 -- # x722=() 00:29:33.530 18:06:37 -- nvmf/common.sh@296 -- # local -ga x722 00:29:33.530 18:06:37 -- nvmf/common.sh@297 -- # mlx=() 00:29:33.530 18:06:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:33.530 18:06:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:33.530 18:06:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:33.530 18:06:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:33.530 18:06:37 -- 
nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:33.530 18:06:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:33.530 18:06:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:33.530 18:06:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:33.530 18:06:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:33.530 18:06:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:33.530 18:06:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:33.530 18:06:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:33.530 18:06:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:33.530 18:06:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:33.530 18:06:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:33.530 18:06:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:33.530 18:06:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:33.530 18:06:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:33.530 18:06:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:33.530 18:06:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:33.530 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:33.530 18:06:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:33.530 18:06:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:33.530 18:06:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.530 18:06:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.530 18:06:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:33.530 18:06:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:33.530 18:06:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:33.530 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:33.530 18:06:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:33.530 18:06:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:33.530 18:06:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.530 18:06:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.530 18:06:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:33.530 18:06:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:33.530 18:06:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:33.530 18:06:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:33.530 18:06:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:33.530 18:06:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.530 18:06:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:33.530 18:06:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.530 18:06:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:33.530 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:33.530 18:06:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.530 18:06:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:33.530 18:06:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.530 18:06:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:33.530 18:06:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.530 18:06:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:33.530 Found net devices under 0000:4b:00.1: cvl_0_1 
00:29:33.530 18:06:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.530 18:06:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:33.530 18:06:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:33.530 18:06:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:33.530 18:06:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:33.530 18:06:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:33.530 18:06:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:33.530 18:06:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:33.530 18:06:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:33.530 18:06:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:33.530 18:06:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:33.530 18:06:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:33.530 18:06:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:33.530 18:06:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:33.530 18:06:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:33.530 18:06:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:33.530 18:06:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:33.530 18:06:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:33.530 18:06:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:33.530 18:06:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:33.530 18:06:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:33.530 18:06:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:33.530 18:06:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:33.530 18:06:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:33.530 18:06:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:33.530 18:06:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:33.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:33.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.433 ms 00:29:33.531 00:29:33.531 --- 10.0.0.2 ping statistics --- 00:29:33.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.531 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:29:33.531 18:06:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:33.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:33.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.376 ms 00:29:33.531 00:29:33.531 --- 10.0.0.1 ping statistics --- 00:29:33.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.531 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:29:33.531 18:06:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:33.531 18:06:37 -- nvmf/common.sh@410 -- # return 0 00:29:33.531 18:06:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:33.531 18:06:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:33.531 18:06:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:33.531 18:06:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:33.531 18:06:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:33.531 18:06:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:33.531 18:06:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:33.531 18:06:37 -- host/fio.sh@16 -- # [[ y != y ]] 00:29:33.531 18:06:37 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:33.531 18:06:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:33.531 18:06:37 -- common/autotest_common.sh@10 -- # set +x 00:29:33.531 18:06:37 -- host/fio.sh@24 -- # nvmfpid=1836621 00:29:33.531 18:06:37 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:33.531 18:06:37 -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:33.791 18:06:37 -- host/fio.sh@28 -- # waitforlisten 1836621 00:29:33.791 18:06:37 -- common/autotest_common.sh@819 -- # '[' -z 1836621 ']' 00:29:33.791 18:06:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.791 18:06:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:33.791 18:06:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.791 18:06:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:33.791 18:06:37 -- common/autotest_common.sh@10 -- # set +x 00:29:33.791 [2024-07-22 18:06:37.854372] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:33.791 [2024-07-22 18:06:37.854433] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:33.791 EAL: No free 2048 kB hugepages reported on node 1 00:29:33.791 [2024-07-22 18:06:37.947179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:33.791 [2024-07-22 18:06:38.038710] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:33.791 [2024-07-22 18:06:38.038863] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:33.791 [2024-07-22 18:06:38.038873] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:33.791 [2024-07-22 18:06:38.038880] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
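For reference, the nvmf_tcp_init sequence traced above splits the two ports into a target/initiator pair with a network namespace; a condensed sketch of the same steps, using the interface names, addresses and namespace name from this run (the nvmf_tgt path is shortened):

  ip netns add cvl_0_0_ns_spdk                                         # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side stays in the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # target runs in the namespace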
00:29:33.791 [2024-07-22 18:06:38.039017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:33.791 [2024-07-22 18:06:38.039139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:33.791 [2024-07-22 18:06:38.039268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:33.792 [2024-07-22 18:06:38.039271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.733 18:06:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:34.733 18:06:38 -- common/autotest_common.sh@852 -- # return 0 00:29:34.733 18:06:38 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:34.733 [2024-07-22 18:06:38.828794] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.733 18:06:38 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:34.733 18:06:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:34.733 18:06:38 -- common/autotest_common.sh@10 -- # set +x 00:29:34.733 18:06:38 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:34.994 Malloc1 00:29:34.994 18:06:39 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:35.254 18:06:39 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:35.254 18:06:39 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:35.514 [2024-07-22 18:06:39.687645] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:35.514 18:06:39 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:35.810 18:06:39 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:35.810 18:06:39 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:35.810 18:06:39 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:35.810 18:06:39 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:35.810 18:06:39 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:35.810 18:06:39 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:35.810 18:06:39 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:35.810 18:06:39 -- common/autotest_common.sh@1320 -- # shift 00:29:35.810 18:06:39 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:35.810 18:06:39 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:35.810 18:06:39 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:35.810 18:06:39 -- common/autotest_common.sh@1324 -- # grep 
libasan 00:29:35.810 18:06:39 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:35.810 18:06:39 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:35.810 18:06:39 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:35.811 18:06:39 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:35.811 18:06:39 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:35.811 18:06:39 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:29:35.811 18:06:39 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:35.811 18:06:39 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:35.811 18:06:39 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:35.811 18:06:39 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:35.811 18:06:39 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:36.099 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:36.099 fio-3.35 00:29:36.099 Starting 1 thread 00:29:36.099 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.632 00:29:38.632 test: (groupid=0, jobs=1): err= 0: pid=1837327: Mon Jul 22 18:06:42 2024 00:29:38.632 read: IOPS=11.1k, BW=43.4MiB/s (45.5MB/s)(86.9MiB/2005msec) 00:29:38.632 slat (nsec): min=1889, max=274485, avg=2033.15, stdev=2625.26 00:29:38.632 clat (usec): min=3673, max=11011, avg=6370.24, stdev=456.87 00:29:38.632 lat (usec): min=3703, max=11013, avg=6372.28, stdev=456.82 00:29:38.632 clat percentiles (usec): 00:29:38.632 | 1.00th=[ 5276], 5.00th=[ 5669], 10.00th=[ 5800], 20.00th=[ 5997], 00:29:38.632 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6456], 00:29:38.632 | 70.00th=[ 6587], 80.00th=[ 6718], 90.00th=[ 6915], 95.00th=[ 7046], 00:29:38.632 | 99.00th=[ 7373], 99.50th=[ 7570], 99.90th=[ 9372], 99.95th=[10421], 00:29:38.632 | 99.99th=[10945] 00:29:38.632 bw ( KiB/s): min=43176, max=45080, per=99.89%, avg=44344.00, stdev=828.89, samples=4 00:29:38.632 iops : min=10794, max=11270, avg=11086.00, stdev=207.22, samples=4 00:29:38.632 write: IOPS=11.1k, BW=43.2MiB/s (45.3MB/s)(86.6MiB/2005msec); 0 zone resets 00:29:38.632 slat (nsec): min=1945, max=270166, avg=2124.90, stdev=1993.67 00:29:38.632 clat (usec): min=2871, max=9657, avg=5096.88, stdev=381.54 00:29:38.632 lat (usec): min=2889, max=9659, avg=5099.00, stdev=381.57 00:29:38.632 clat percentiles (usec): 00:29:38.632 | 1.00th=[ 4178], 5.00th=[ 4490], 10.00th=[ 4621], 20.00th=[ 4817], 00:29:38.632 | 30.00th=[ 4883], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5211], 00:29:38.632 | 70.00th=[ 5276], 80.00th=[ 5407], 90.00th=[ 5538], 95.00th=[ 5669], 00:29:38.632 | 99.00th=[ 5932], 99.50th=[ 6063], 99.90th=[ 7504], 99.95th=[ 8586], 00:29:38.632 | 99.99th=[ 9372] 00:29:38.632 bw ( KiB/s): min=43576, max=44968, per=100.00%, avg=44258.00, stdev=569.65, samples=4 00:29:38.632 iops : min=10894, max=11242, avg=11064.50, stdev=142.41, samples=4 00:29:38.632 lat (msec) : 4=0.23%, 10=99.74%, 20=0.03% 00:29:38.632 cpu : usr=71.16%, sys=27.59%, ctx=40, majf=0, minf=41 00:29:38.632 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:29:38.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:38.632 complete 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:38.632 issued rwts: total=22251,22177,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:38.632 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:38.632 00:29:38.632 Run status group 0 (all jobs): 00:29:38.632 READ: bw=43.4MiB/s (45.5MB/s), 43.4MiB/s-43.4MiB/s (45.5MB/s-45.5MB/s), io=86.9MiB (91.1MB), run=2005-2005msec 00:29:38.632 WRITE: bw=43.2MiB/s (45.3MB/s), 43.2MiB/s-43.2MiB/s (45.3MB/s-45.3MB/s), io=86.6MiB (90.8MB), run=2005-2005msec 00:29:38.632 18:06:42 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:38.632 18:06:42 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:38.632 18:06:42 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:38.632 18:06:42 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:38.632 18:06:42 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:38.632 18:06:42 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:38.632 18:06:42 -- common/autotest_common.sh@1320 -- # shift 00:29:38.632 18:06:42 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:38.632 18:06:42 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:38.632 18:06:42 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:38.632 18:06:42 -- common/autotest_common.sh@1324 -- # grep libasan 00:29:38.632 18:06:42 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:38.632 18:06:42 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:38.632 18:06:42 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:38.632 18:06:42 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:38.632 18:06:42 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:38.632 18:06:42 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:29:38.632 18:06:42 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:38.632 18:06:42 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:38.632 18:06:42 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:38.632 18:06:42 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:38.632 18:06:42 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:38.894 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:38.894 fio-3.35 00:29:38.894 Starting 1 thread 00:29:38.894 EAL: No free 2048 kB hugepages reported on node 1 00:29:41.434 [2024-07-22 18:06:45.295709] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3c0c0 is same with the state(5) to be set 00:29:41.435 00:29:41.435 test: (groupid=0, jobs=1): err= 0: pid=1837870: Mon Jul 22 18:06:45 2024 00:29:41.435 read: 
IOPS=10.1k, BW=158MiB/s (165MB/s)(316MiB/2006msec) 00:29:41.435 slat (usec): min=3, max=106, avg= 3.38, stdev= 1.66 00:29:41.435 clat (usec): min=1720, max=14678, avg=7602.70, stdev=1886.08 00:29:41.435 lat (usec): min=1723, max=14681, avg=7606.08, stdev=1886.30 00:29:41.435 clat percentiles (usec): 00:29:41.435 | 1.00th=[ 4015], 5.00th=[ 4752], 10.00th=[ 5211], 20.00th=[ 5866], 00:29:41.435 | 30.00th=[ 6456], 40.00th=[ 6980], 50.00th=[ 7504], 60.00th=[ 8029], 00:29:41.435 | 70.00th=[ 8586], 80.00th=[ 9372], 90.00th=[10159], 95.00th=[10683], 00:29:41.435 | 99.00th=[12125], 99.50th=[12911], 99.90th=[14091], 99.95th=[14484], 00:29:41.435 | 99.99th=[14615] 00:29:41.435 bw ( KiB/s): min=71680, max=95552, per=50.16%, avg=80952.00, stdev=10428.61, samples=4 00:29:41.435 iops : min= 4480, max= 5972, avg=5059.50, stdev=651.79, samples=4 00:29:41.435 write: IOPS=5897, BW=92.2MiB/s (96.6MB/s)(165MiB/1791msec); 0 zone resets 00:29:41.435 slat (usec): min=36, max=322, avg=37.96, stdev= 7.42 00:29:41.435 clat (usec): min=1982, max=15316, avg=8796.41, stdev=1363.16 00:29:41.435 lat (usec): min=2019, max=15453, avg=8834.38, stdev=1365.10 00:29:41.435 clat percentiles (usec): 00:29:41.435 | 1.00th=[ 6063], 5.00th=[ 6849], 10.00th=[ 7177], 20.00th=[ 7635], 00:29:41.435 | 30.00th=[ 8029], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 9110], 00:29:41.435 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[10552], 95.00th=[11076], 00:29:41.435 | 99.00th=[12518], 99.50th=[13042], 99.90th=[14877], 99.95th=[15008], 00:29:41.435 | 99.99th=[15139] 00:29:41.435 bw ( KiB/s): min=74848, max=99296, per=88.97%, avg=83952.00, stdev=10805.09, samples=4 00:29:41.435 iops : min= 4678, max= 6206, avg=5247.00, stdev=675.32, samples=4 00:29:41.435 lat (msec) : 2=0.04%, 4=0.66%, 10=85.36%, 20=13.95% 00:29:41.435 cpu : usr=83.49%, sys=15.16%, ctx=23, majf=0, minf=64 00:29:41.435 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:29:41.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:41.435 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:41.435 issued rwts: total=20235,10563,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:41.435 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:41.435 00:29:41.435 Run status group 0 (all jobs): 00:29:41.435 READ: bw=158MiB/s (165MB/s), 158MiB/s-158MiB/s (165MB/s-165MB/s), io=316MiB (332MB), run=2006-2006msec 00:29:41.435 WRITE: bw=92.2MiB/s (96.6MB/s), 92.2MiB/s-92.2MiB/s (96.6MB/s-96.6MB/s), io=165MiB (173MB), run=1791-1791msec 00:29:41.435 18:06:45 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:41.435 18:06:45 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:29:41.435 18:06:45 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:29:41.435 18:06:45 -- host/fio.sh@51 -- # get_nvme_bdfs 00:29:41.435 18:06:45 -- common/autotest_common.sh@1498 -- # bdfs=() 00:29:41.435 18:06:45 -- common/autotest_common.sh@1498 -- # local bdfs 00:29:41.435 18:06:45 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:41.435 18:06:45 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:41.435 18:06:45 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:29:41.435 18:06:45 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:29:41.435 18:06:45 -- common/autotest_common.sh@1504 -- # printf '%s\n' 
0000:65:00.0 00:29:41.435 18:06:45 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 00:29:44.730 Nvme0n1 00:29:44.730 18:06:48 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:29:50.012 18:06:54 -- host/fio.sh@53 -- # ls_guid=945ea4ae-d665-4b31-8358-ae0812075b71 00:29:50.012 18:06:54 -- host/fio.sh@54 -- # get_lvs_free_mb 945ea4ae-d665-4b31-8358-ae0812075b71 00:29:50.012 18:06:54 -- common/autotest_common.sh@1343 -- # local lvs_uuid=945ea4ae-d665-4b31-8358-ae0812075b71 00:29:50.012 18:06:54 -- common/autotest_common.sh@1344 -- # local lvs_info 00:29:50.012 18:06:54 -- common/autotest_common.sh@1345 -- # local fc 00:29:50.012 18:06:54 -- common/autotest_common.sh@1346 -- # local cs 00:29:50.012 18:06:54 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:50.272 18:06:54 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:29:50.272 { 00:29:50.272 "uuid": "945ea4ae-d665-4b31-8358-ae0812075b71", 00:29:50.272 "name": "lvs_0", 00:29:50.272 "base_bdev": "Nvme0n1", 00:29:50.272 "total_data_clusters": 1862, 00:29:50.272 "free_clusters": 1862, 00:29:50.272 "block_size": 512, 00:29:50.272 "cluster_size": 1073741824 00:29:50.272 } 00:29:50.272 ]' 00:29:50.272 18:06:54 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="945ea4ae-d665-4b31-8358-ae0812075b71") .free_clusters' 00:29:50.272 18:06:54 -- common/autotest_common.sh@1348 -- # fc=1862 00:29:50.272 18:06:54 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="945ea4ae-d665-4b31-8358-ae0812075b71") .cluster_size' 00:29:50.272 18:06:54 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:29:50.272 18:06:54 -- common/autotest_common.sh@1352 -- # free_mb=1906688 00:29:50.272 18:06:54 -- common/autotest_common.sh@1353 -- # echo 1906688 00:29:50.272 1906688 00:29:50.272 18:06:54 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1906688 00:29:50.842 f5761eae-3311-4115-879f-145a76c2a226 00:29:50.842 18:06:54 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:29:51.102 18:06:55 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:29:51.363 18:06:55 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:51.363 18:06:55 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:51.363 18:06:55 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:51.363 18:06:55 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:51.363 18:06:55 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:51.363 18:06:55 -- common/autotest_common.sh@1318 
-- # local sanitizers 00:29:51.363 18:06:55 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:51.363 18:06:55 -- common/autotest_common.sh@1320 -- # shift 00:29:51.363 18:06:55 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:51.363 18:06:55 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:51.363 18:06:55 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:51.363 18:06:55 -- common/autotest_common.sh@1324 -- # grep libasan 00:29:51.363 18:06:55 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:51.363 18:06:55 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:51.363 18:06:55 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:51.363 18:06:55 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:51.363 18:06:55 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:51.363 18:06:55 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:29:51.363 18:06:55 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:51.621 18:06:55 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:51.621 18:06:55 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:51.621 18:06:55 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:51.621 18:06:55 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:51.889 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:51.889 fio-3.35 00:29:51.889 Starting 1 thread 00:29:51.889 EAL: No free 2048 kB hugepages reported on node 1 00:29:54.429 00:29:54.429 test: (groupid=0, jobs=1): err= 0: pid=1840199: Mon Jul 22 18:06:58 2024 00:29:54.429 read: IOPS=6946, BW=27.1MiB/s (28.5MB/s)(54.4MiB/2006msec) 00:29:54.429 slat (nsec): min=1902, max=108686, avg=2060.00, stdev=1282.05 00:29:54.429 clat (usec): min=453, max=333530, avg=10075.80, stdev=22020.02 00:29:54.429 lat (usec): min=455, max=333535, avg=10077.86, stdev=22020.16 00:29:54.429 clat percentiles (msec): 00:29:54.429 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:29:54.429 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:29:54.429 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 10], 00:29:54.429 | 99.00th=[ 11], 99.50th=[ 15], 99.90th=[ 334], 99.95th=[ 334], 00:29:54.429 | 99.99th=[ 334] 00:29:54.429 bw ( KiB/s): min=11176, max=33392, per=99.79%, avg=27726.00, stdev=11035.05, samples=4 00:29:54.429 iops : min= 2794, max= 8348, avg=6931.50, stdev=2758.76, samples=4 00:29:54.429 write: IOPS=6952, BW=27.2MiB/s (28.5MB/s)(54.5MiB/2006msec); 0 zone resets 00:29:54.429 slat (nsec): min=1965, max=107031, avg=2143.02, stdev=934.62 00:29:54.429 clat (usec): min=478, max=332303, avg=8267.04, stdev=21435.57 00:29:54.429 lat (usec): min=480, max=332309, avg=8269.18, stdev=21435.70 00:29:54.429 clat percentiles (msec): 00:29:54.429 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 7], 00:29:54.429 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:29:54.429 | 70.00th=[ 8], 80.00th=[ 8], 90.00th=[ 8], 95.00th=[ 8], 00:29:54.429 | 99.00th=[ 9], 99.50th=[ 11], 99.90th=[ 334], 
99.95th=[ 334], 00:29:54.429 | 99.99th=[ 334] 00:29:54.429 bw ( KiB/s): min=11792, max=33216, per=99.90%, avg=27780.00, stdev=10658.97, samples=4 00:29:54.429 iops : min= 2948, max= 8304, avg=6945.00, stdev=2664.74, samples=4 00:29:54.429 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:29:54.429 lat (msec) : 2=0.05%, 4=0.22%, 10=98.40%, 20=0.83%, 500=0.46% 00:29:54.429 cpu : usr=69.98%, sys=29.13%, ctx=34, majf=0, minf=41 00:29:54.429 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:54.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:54.429 issued rwts: total=13934,13946,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.429 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:54.429 00:29:54.429 Run status group 0 (all jobs): 00:29:54.429 READ: bw=27.1MiB/s (28.5MB/s), 27.1MiB/s-27.1MiB/s (28.5MB/s-28.5MB/s), io=54.4MiB (57.1MB), run=2006-2006msec 00:29:54.429 WRITE: bw=27.2MiB/s (28.5MB/s), 27.2MiB/s-27.2MiB/s (28.5MB/s-28.5MB/s), io=54.5MiB (57.1MB), run=2006-2006msec 00:29:54.429 18:06:58 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:54.429 18:06:58 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:29:55.813 18:06:59 -- host/fio.sh@64 -- # ls_nested_guid=32818c9a-6ed7-46a1-b207-6c864aaafaa4 00:29:55.813 18:06:59 -- host/fio.sh@65 -- # get_lvs_free_mb 32818c9a-6ed7-46a1-b207-6c864aaafaa4 00:29:55.813 18:06:59 -- common/autotest_common.sh@1343 -- # local lvs_uuid=32818c9a-6ed7-46a1-b207-6c864aaafaa4 00:29:55.813 18:06:59 -- common/autotest_common.sh@1344 -- # local lvs_info 00:29:55.813 18:06:59 -- common/autotest_common.sh@1345 -- # local fc 00:29:55.813 18:06:59 -- common/autotest_common.sh@1346 -- # local cs 00:29:55.813 18:06:59 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:55.813 18:07:00 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:29:55.813 { 00:29:55.813 "uuid": "945ea4ae-d665-4b31-8358-ae0812075b71", 00:29:55.813 "name": "lvs_0", 00:29:55.813 "base_bdev": "Nvme0n1", 00:29:55.813 "total_data_clusters": 1862, 00:29:55.813 "free_clusters": 0, 00:29:55.813 "block_size": 512, 00:29:55.813 "cluster_size": 1073741824 00:29:55.813 }, 00:29:55.813 { 00:29:55.813 "uuid": "32818c9a-6ed7-46a1-b207-6c864aaafaa4", 00:29:55.813 "name": "lvs_n_0", 00:29:55.813 "base_bdev": "f5761eae-3311-4115-879f-145a76c2a226", 00:29:55.813 "total_data_clusters": 476206, 00:29:55.813 "free_clusters": 476206, 00:29:55.813 "block_size": 512, 00:29:55.813 "cluster_size": 4194304 00:29:55.813 } 00:29:55.813 ]' 00:29:55.813 18:07:00 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="32818c9a-6ed7-46a1-b207-6c864aaafaa4") .free_clusters' 00:29:56.072 18:07:00 -- common/autotest_common.sh@1348 -- # fc=476206 00:29:56.072 18:07:00 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="32818c9a-6ed7-46a1-b207-6c864aaafaa4") .cluster_size' 00:29:56.072 18:07:00 -- common/autotest_common.sh@1349 -- # cs=4194304 00:29:56.072 18:07:00 -- common/autotest_common.sh@1352 -- # free_mb=1904824 00:29:56.072 18:07:00 -- common/autotest_common.sh@1353 -- # echo 1904824 00:29:56.072 1904824 00:29:56.072 18:07:00 -- host/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1904824 00:29:57.012 c7bf791e-47fa-41fb-bdb7-76a68908171c 00:29:57.012 18:07:01 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:29:57.273 18:07:01 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:29:57.533 18:07:01 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:29:57.533 18:07:01 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:57.533 18:07:01 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:57.533 18:07:01 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:57.533 18:07:01 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:57.533 18:07:01 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:57.533 18:07:01 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:57.533 18:07:01 -- common/autotest_common.sh@1320 -- # shift 00:29:57.533 18:07:01 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:57.533 18:07:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:57.533 18:07:01 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:57.533 18:07:01 -- common/autotest_common.sh@1324 -- # grep libasan 00:29:57.533 18:07:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:57.533 18:07:01 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:57.533 18:07:01 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:57.533 18:07:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:57.799 18:07:01 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:57.799 18:07:01 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:29:57.799 18:07:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:57.800 18:07:01 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:57.800 18:07:01 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:57.800 18:07:01 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:57.800 18:07:01 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:58.061 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:58.061 fio-3.35 00:29:58.061 Starting 1 thread 00:29:58.061 EAL: No free 2048 kB hugepages reported on node 1 00:30:00.603 00:30:00.603 test: (groupid=0, jobs=1): err= 0: pid=1841315: Mon 
Jul 22 18:07:04 2024 00:30:00.603 read: IOPS=9093, BW=35.5MiB/s (37.2MB/s)(71.2MiB/2005msec) 00:30:00.603 slat (nsec): min=1899, max=104782, avg=2059.03, stdev=1072.45 00:30:00.603 clat (usec): min=3480, max=12629, avg=7772.68, stdev=949.20 00:30:00.603 lat (usec): min=3484, max=12631, avg=7774.74, stdev=949.19 00:30:00.603 clat percentiles (usec): 00:30:00.603 | 1.00th=[ 6128], 5.00th=[ 6587], 10.00th=[ 6783], 20.00th=[ 7046], 00:30:00.603 | 30.00th=[ 7242], 40.00th=[ 7439], 50.00th=[ 7570], 60.00th=[ 7767], 00:30:00.603 | 70.00th=[ 7963], 80.00th=[ 8291], 90.00th=[ 9241], 95.00th=[ 9765], 00:30:00.603 | 99.00th=[10683], 99.50th=[10814], 99.90th=[11600], 99.95th=[11731], 00:30:00.603 | 99.99th=[12649] 00:30:00.603 bw ( KiB/s): min=30728, max=38640, per=99.84%, avg=36316.00, stdev=3739.66, samples=4 00:30:00.603 iops : min= 7682, max= 9660, avg=9079.00, stdev=934.91, samples=4 00:30:00.603 write: IOPS=9104, BW=35.6MiB/s (37.3MB/s)(71.3MiB/2005msec); 0 zone resets 00:30:00.603 slat (nsec): min=1976, max=97694, avg=2150.71, stdev=765.39 00:30:00.603 clat (usec): min=1672, max=10007, avg=6215.47, stdev=800.60 00:30:00.603 lat (usec): min=1678, max=10010, avg=6217.62, stdev=800.61 00:30:00.603 clat percentiles (usec): 00:30:00.603 | 1.00th=[ 4752], 5.00th=[ 5211], 10.00th=[ 5407], 20.00th=[ 5604], 00:30:00.604 | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 6063], 60.00th=[ 6194], 00:30:00.604 | 70.00th=[ 6390], 80.00th=[ 6652], 90.00th=[ 7439], 95.00th=[ 7898], 00:30:00.604 | 99.00th=[ 8586], 99.50th=[ 8848], 99.90th=[ 9372], 99.95th=[ 9634], 00:30:00.604 | 99.99th=[ 9896] 00:30:00.604 bw ( KiB/s): min=31760, max=38080, per=99.96%, avg=36404.00, stdev=3101.29, samples=4 00:30:00.604 iops : min= 7940, max= 9520, avg=9101.00, stdev=775.32, samples=4 00:30:00.604 lat (msec) : 2=0.01%, 4=0.06%, 10=98.00%, 20=1.93% 00:30:00.604 cpu : usr=71.11%, sys=27.79%, ctx=62, majf=0, minf=41 00:30:00.604 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:00.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:00.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:00.604 issued rwts: total=18232,18255,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:00.604 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:00.604 00:30:00.604 Run status group 0 (all jobs): 00:30:00.604 READ: bw=35.5MiB/s (37.2MB/s), 35.5MiB/s-35.5MiB/s (37.2MB/s-37.2MB/s), io=71.2MiB (74.7MB), run=2005-2005msec 00:30:00.604 WRITE: bw=35.6MiB/s (37.3MB/s), 35.6MiB/s-35.6MiB/s (37.3MB/s-37.3MB/s), io=71.3MiB (74.8MB), run=2005-2005msec 00:30:00.604 18:07:04 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:00.604 18:07:04 -- host/fio.sh@74 -- # sync 00:30:00.604 18:07:04 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:08.736 18:07:12 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:08.736 18:07:12 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:14.021 18:07:17 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:14.021 18:07:18 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 
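For reference, a condensed sketch of the lvol-over-NVMe path this phase exercised (rpc.py path shortened; names, NQNs and sizes are the ones reported above, and the free-MB figures follow from clusters x cluster_size: 1862 x 1 GiB = 1906688 MiB for lvs_0, 476206 x 4 MiB = 1904824 MiB for lvs_n_0):

  rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
  rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0                # 1 GiB clusters
  rpc.py bdev_lvol_create -l lvs_0 lbd_0 1906688                             # 1862 * 1024 MiB
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  # nested store on top of the first lvol, then the same subsystem/listener steps for cnode3
  rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0    # 4 MiB clusters
  rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1904824                      # 476206 * 4 MiB
  # teardown mirrors the setup in reverse
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
  rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0
  rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
  rpc.py bdev_lvol_delete lvs_0/lbd_0
  rpc.py bdev_lvol_delete_lvstore -l lvs_0
  rpc.py bdev_nvme_detach_controller Nvme0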
00:30:17.319 18:07:21 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:17.319 18:07:21 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:17.319 18:07:21 -- host/fio.sh@86 -- # nvmftestfini 00:30:17.319 18:07:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:17.319 18:07:21 -- nvmf/common.sh@116 -- # sync 00:30:17.319 18:07:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:17.319 18:07:21 -- nvmf/common.sh@119 -- # set +e 00:30:17.319 18:07:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:17.319 18:07:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:17.319 rmmod nvme_tcp 00:30:17.319 rmmod nvme_fabrics 00:30:17.319 rmmod nvme_keyring 00:30:17.319 18:07:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:17.319 18:07:21 -- nvmf/common.sh@123 -- # set -e 00:30:17.319 18:07:21 -- nvmf/common.sh@124 -- # return 0 00:30:17.319 18:07:21 -- nvmf/common.sh@477 -- # '[' -n 1836621 ']' 00:30:17.319 18:07:21 -- nvmf/common.sh@478 -- # killprocess 1836621 00:30:17.319 18:07:21 -- common/autotest_common.sh@926 -- # '[' -z 1836621 ']' 00:30:17.319 18:07:21 -- common/autotest_common.sh@930 -- # kill -0 1836621 00:30:17.319 18:07:21 -- common/autotest_common.sh@931 -- # uname 00:30:17.319 18:07:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:17.319 18:07:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1836621 00:30:17.319 18:07:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:17.319 18:07:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:17.319 18:07:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1836621' 00:30:17.320 killing process with pid 1836621 00:30:17.320 18:07:21 -- common/autotest_common.sh@945 -- # kill 1836621 00:30:17.320 18:07:21 -- common/autotest_common.sh@950 -- # wait 1836621 00:30:17.320 18:07:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:17.320 18:07:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:17.320 18:07:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:17.320 18:07:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:17.320 18:07:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:17.320 18:07:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.320 18:07:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:17.320 18:07:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:19.920 18:07:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:19.920 00:30:19.920 real 0m54.179s 00:30:19.920 user 3m40.597s 00:30:19.920 sys 0m10.661s 00:30:19.920 18:07:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:19.920 18:07:23 -- common/autotest_common.sh@10 -- # set +x 00:30:19.920 ************************************ 00:30:19.920 END TEST nvmf_fio_host 00:30:19.920 ************************************ 00:30:19.920 18:07:23 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:19.920 18:07:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:19.920 18:07:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:19.920 18:07:23 -- common/autotest_common.sh@10 -- # set +x 00:30:19.920 ************************************ 00:30:19.920 START TEST nvmf_failover 00:30:19.920 ************************************ 00:30:19.920 18:07:23 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:19.920 * Looking for test storage... 00:30:19.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:19.920 18:07:23 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:19.920 18:07:23 -- nvmf/common.sh@7 -- # uname -s 00:30:19.920 18:07:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:19.920 18:07:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:19.920 18:07:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:19.920 18:07:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:19.920 18:07:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:19.920 18:07:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:19.920 18:07:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:19.920 18:07:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:19.920 18:07:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:19.920 18:07:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:19.920 18:07:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:30:19.920 18:07:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:30:19.920 18:07:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:19.920 18:07:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:19.920 18:07:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:19.920 18:07:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:19.920 18:07:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:19.920 18:07:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:19.920 18:07:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:19.920 18:07:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.920 18:07:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.920 18:07:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.920 18:07:23 -- paths/export.sh@5 -- # export PATH 00:30:19.920 18:07:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.920 18:07:23 -- nvmf/common.sh@46 -- # : 0 00:30:19.920 18:07:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:19.920 18:07:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:19.920 18:07:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:19.920 18:07:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:19.920 18:07:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:19.920 18:07:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:19.920 18:07:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:19.920 18:07:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:19.920 18:07:23 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:19.920 18:07:23 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:19.920 18:07:23 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:19.920 18:07:23 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:19.920 18:07:23 -- host/failover.sh@18 -- # nvmftestinit 00:30:19.920 18:07:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:19.920 18:07:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:19.920 18:07:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:19.920 18:07:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:19.920 18:07:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:19.920 18:07:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:19.920 18:07:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:19.920 18:07:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:19.920 18:07:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:19.920 18:07:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:19.920 18:07:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:19.920 18:07:23 -- common/autotest_common.sh@10 -- # set +x 00:30:28.060 18:07:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:28.060 18:07:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:28.060 18:07:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:28.060 18:07:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:28.060 18:07:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:28.060 18:07:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:28.060 18:07:30 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:30:28.060 18:07:30 -- nvmf/common.sh@294 -- # net_devs=() 00:30:28.060 18:07:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:28.060 18:07:30 -- nvmf/common.sh@295 -- # e810=() 00:30:28.060 18:07:30 -- nvmf/common.sh@295 -- # local -ga e810 00:30:28.060 18:07:30 -- nvmf/common.sh@296 -- # x722=() 00:30:28.060 18:07:30 -- nvmf/common.sh@296 -- # local -ga x722 00:30:28.060 18:07:30 -- nvmf/common.sh@297 -- # mlx=() 00:30:28.060 18:07:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:28.060 18:07:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:28.060 18:07:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:28.060 18:07:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:28.060 18:07:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:28.060 18:07:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:28.060 18:07:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:28.060 18:07:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:28.060 18:07:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:28.060 18:07:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:28.060 18:07:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:28.060 18:07:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:28.060 18:07:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:28.060 18:07:30 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:28.060 18:07:30 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:28.060 18:07:30 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:28.060 18:07:30 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:28.060 18:07:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:28.060 18:07:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:28.060 18:07:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:28.060 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:28.060 18:07:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:28.060 18:07:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:28.060 18:07:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.060 18:07:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.060 18:07:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:28.060 18:07:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:28.060 18:07:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:28.060 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:28.060 18:07:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:28.060 18:07:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:28.060 18:07:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.060 18:07:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.060 18:07:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:28.061 18:07:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:28.061 18:07:30 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:28.061 18:07:30 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:28.061 18:07:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:28.061 18:07:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.061 18:07:30 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:30:28.061 18:07:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.061 18:07:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:28.061 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:28.061 18:07:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.061 18:07:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:28.061 18:07:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.061 18:07:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:28.061 18:07:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.061 18:07:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:28.061 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:28.061 18:07:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.061 18:07:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:28.061 18:07:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:28.061 18:07:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:28.061 18:07:30 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:28.061 18:07:30 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:28.061 18:07:30 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:28.061 18:07:30 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:28.061 18:07:30 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:28.061 18:07:30 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:28.061 18:07:30 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:28.061 18:07:30 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:28.061 18:07:30 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:28.061 18:07:30 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:28.061 18:07:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:28.061 18:07:30 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:28.061 18:07:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:28.061 18:07:30 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:28.061 18:07:30 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:28.061 18:07:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:28.061 18:07:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:28.061 18:07:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:28.061 18:07:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:28.061 18:07:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:28.061 18:07:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:28.061 18:07:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:28.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:28.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:30:28.061 00:30:28.061 --- 10.0.0.2 ping statistics --- 00:30:28.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.061 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:30:28.061 18:07:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:28.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:28.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:30:28.061 00:30:28.061 --- 10.0.0.1 ping statistics --- 00:30:28.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.061 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:30:28.061 18:07:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:28.061 18:07:31 -- nvmf/common.sh@410 -- # return 0 00:30:28.061 18:07:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:28.061 18:07:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:28.061 18:07:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:28.061 18:07:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:28.061 18:07:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:28.061 18:07:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:28.061 18:07:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:28.061 18:07:31 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:28.061 18:07:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:28.061 18:07:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:28.061 18:07:31 -- common/autotest_common.sh@10 -- # set +x 00:30:28.061 18:07:31 -- nvmf/common.sh@469 -- # nvmfpid=1848851 00:30:28.061 18:07:31 -- nvmf/common.sh@470 -- # waitforlisten 1848851 00:30:28.061 18:07:31 -- common/autotest_common.sh@819 -- # '[' -z 1848851 ']' 00:30:28.061 18:07:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:28.061 18:07:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:28.061 18:07:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:28.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:28.061 18:07:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:28.061 18:07:31 -- common/autotest_common.sh@10 -- # set +x 00:30:28.061 18:07:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:28.061 [2024-07-22 18:07:31.291310] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:30:28.061 [2024-07-22 18:07:31.291377] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:28.061 EAL: No free 2048 kB hugepages reported on node 1 00:30:28.061 [2024-07-22 18:07:31.365284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:28.061 [2024-07-22 18:07:31.433857] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:28.061 [2024-07-22 18:07:31.433980] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:28.061 [2024-07-22 18:07:31.433987] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:28.061 [2024-07-22 18:07:31.433994] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:28.061 [2024-07-22 18:07:31.434118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:28.061 [2024-07-22 18:07:31.434236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:28.061 [2024-07-22 18:07:31.434238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:28.061 18:07:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:28.061 18:07:32 -- common/autotest_common.sh@852 -- # return 0 00:30:28.061 18:07:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:28.061 18:07:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:28.061 18:07:32 -- common/autotest_common.sh@10 -- # set +x 00:30:28.061 18:07:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:28.061 18:07:32 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:28.322 [2024-07-22 18:07:32.361264] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:28.322 18:07:32 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:28.583 Malloc0 00:30:28.583 18:07:32 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:28.583 18:07:32 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:28.844 18:07:33 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:29.104 [2024-07-22 18:07:33.207863] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:29.104 18:07:33 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:29.365 [2024-07-22 18:07:33.408415] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:29.365 18:07:33 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:29.365 [2024-07-22 18:07:33.609027] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:29.625 18:07:33 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:29.625 18:07:33 -- host/failover.sh@31 -- # bdevperf_pid=1849198 00:30:29.625 18:07:33 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:29.625 18:07:33 -- host/failover.sh@34 -- # waitforlisten 1849198 /var/tmp/bdevperf.sock 00:30:29.625 18:07:33 -- common/autotest_common.sh@819 -- # '[' -z 1849198 ']' 00:30:29.625 18:07:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:29.625 18:07:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:29.625 18:07:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:30:29.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:29.625 18:07:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:29.625 18:07:33 -- common/autotest_common.sh@10 -- # set +x 00:30:30.563 18:07:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:30.563 18:07:34 -- common/autotest_common.sh@852 -- # return 0 00:30:30.563 18:07:34 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:30.563 NVMe0n1 00:30:30.563 18:07:34 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:30.823 00:30:30.823 18:07:35 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:30.823 18:07:35 -- host/failover.sh@39 -- # run_test_pid=1849504 00:30:30.823 18:07:35 -- host/failover.sh@41 -- # sleep 1 00:30:32.207 18:07:36 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:32.207 [2024-07-22 18:07:36.239066] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239108] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239114] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239119] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239124] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239129] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239133] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239138] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239143] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239147] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239152] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239156] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239161] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the 
state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239165] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239170] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239175] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239184] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239189] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239193] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239198] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239203] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239207] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239212] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239216] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239220] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239225] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239230] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239234] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239239] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239243] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239248] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239253] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239258] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239263] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239268] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239272] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239277] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239282] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239286] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239291] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239296] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239300] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239305] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239310] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239315] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239319] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239324] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.207 [2024-07-22 18:07:36.239328] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.208 [2024-07-22 18:07:36.239333] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.208 [2024-07-22 18:07:36.239338] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.208 [2024-07-22 18:07:36.239343] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.208 [2024-07-22 18:07:36.239347] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.208 [2024-07-22 18:07:36.239355] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.208 [2024-07-22 18:07:36.239360] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.208 [2024-07-22 18:07:36.239364] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.208 [2024-07-22 18:07:36.239369] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.208 [2024-07-22 
18:07:36.239373] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.208 [2024-07-22 18:07:36.239378] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.208 [2024-07-22 18:07:36.239383] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.208 [2024-07-22 18:07:36.239388] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.208 [2024-07-22 18:07:36.239392] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.208 [2024-07-22 18:07:36.239397] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.208 [2024-07-22 18:07:36.239401] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.208 [2024-07-22 18:07:36.239406] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd230 is same with the state(5) to be set 00:30:32.208 18:07:36 -- host/failover.sh@45 -- # sleep 3 00:30:35.506 18:07:39 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:35.506 00:30:35.506 18:07:39 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:35.506 [2024-07-22 18:07:39.765819] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.506 [2024-07-22 18:07:39.765858] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.765865] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.765878] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.765884] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.765890] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.765896] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.765902] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.765908] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.765913] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.765919] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.765925] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.765931] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.765937] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.765943] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.765949] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.765955] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.765961] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.765967] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.765972] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.765979] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.765985] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.765991] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.765996] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766002] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766008] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766014] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766019] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766025] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766031] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766039] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766045] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766051] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766057] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766063] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766069] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766074] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766080] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766087] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766093] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766099] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766105] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766111] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766117] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766123] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766128] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766134] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766140] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766146] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766152] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766159] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766165] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766171] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766177] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the 
state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766183] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766188] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766194] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766202] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766209] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766215] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766221] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766228] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766234] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766240] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766246] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766252] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766259] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766265] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.507 [2024-07-22 18:07:39.766271] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fda40 is same with the state(5) to be set 00:30:35.768 18:07:39 -- host/failover.sh@50 -- # sleep 3 00:30:39.066 18:07:42 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:39.066 [2024-07-22 18:07:42.979895] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:39.066 18:07:43 -- host/failover.sh@55 -- # sleep 1 00:30:40.007 18:07:44 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:40.007 [2024-07-22 18:07:44.194941] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.194980] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.194987] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.194993] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.195001] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.195007] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.195013] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.195019] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.195024] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.195030] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.195036] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.195048] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.195054] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.195060] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.195066] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.195072] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.195078] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.195083] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.195089] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.195095] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.195101] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.195108] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.195114] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.195120] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.195125] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.195131] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.195137] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.195143] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.195149] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.195156] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.195162] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.195168] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 [2024-07-22 18:07:44.195174] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a81b0 is same with the state(5) to be set 00:30:40.007 18:07:44 -- host/failover.sh@59 -- # wait 1849504 00:30:46.596 0 00:30:46.596 18:07:50 -- host/failover.sh@61 -- # killprocess 1849198 00:30:46.596 18:07:50 -- common/autotest_common.sh@926 -- # '[' -z 1849198 ']' 00:30:46.596 18:07:50 -- common/autotest_common.sh@930 -- # kill -0 1849198 00:30:46.596 18:07:50 -- common/autotest_common.sh@931 -- # uname 00:30:46.596 18:07:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:46.596 18:07:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1849198 00:30:46.596 18:07:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:46.596 18:07:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:46.596 18:07:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1849198' 00:30:46.596 killing process with pid 1849198 00:30:46.596 18:07:50 -- common/autotest_common.sh@945 -- # kill 1849198 00:30:46.596 18:07:50 -- common/autotest_common.sh@950 -- # wait 1849198 00:30:46.596 18:07:50 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:46.596 [2024-07-22 18:07:33.679774] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:30:46.596 [2024-07-22 18:07:33.679826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1849198 ] 00:30:46.596 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.596 [2024-07-22 18:07:33.762284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.596 [2024-07-22 18:07:33.821430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.596 Running I/O for 15 seconds... 
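Before the try.txt dump continues below, here is a condensed sketch of the failover sequence host/failover.sh drove above. Every RPC is copied from the trace; only the long workspace prefix is abbreviated to $rootdir for readability, and the sleeps and pid bookkeeping of the real script are omitted, so treat this as a summary of the log rather than the script itself:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$rootdir/scripts/rpc.py"
brpc="$rpc -s /var/tmp/bdevperf.sock"

# target side: tcp transport, one malloc namespace, listeners on three ports
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
done

# initiator side: bdevperf gets two paths to the same NVMe0 controller, then I/O is started
$rootdir/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
$brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rootdir/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
run_test_pid=$!

# listeners are then removed one at a time so the controller has to fail over; the recv-state
# messages above and the ABORTED - SQ DELETION completions in try.txt below accompany the
# qpair teardown for whichever listener was just removed
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # -> path 4421
$brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # -> path 4422
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # -> back to 4420
wait $run_test_pid    # corresponds to the "wait 1849504" step in the trace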
00:30:46.596 [2024-07-22 18:07:36.239645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:39176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.596 [2024-07-22 18:07:36.239678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.596 [2024-07-22 18:07:36.239697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.596 [2024-07-22 18:07:36.239704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.596 [2024-07-22 18:07:36.239714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.596 [2024-07-22 18:07:36.239721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.596 [2024-07-22 18:07:36.239729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:38696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.596 [2024-07-22 18:07:36.239735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.596 [2024-07-22 18:07:36.239744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.596 [2024-07-22 18:07:36.239751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.596 [2024-07-22 18:07:36.239760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.596 [2024-07-22 18:07:36.239766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.596 [2024-07-22 18:07:36.239775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:38720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.596 [2024-07-22 18:07:36.239782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.596 [2024-07-22 18:07:36.239791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:38744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.596 [2024-07-22 18:07:36.239797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.596 [2024-07-22 18:07:36.239805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.596 [2024-07-22 18:07:36.239812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.596 [2024-07-22 18:07:36.239820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:38776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.596 [2024-07-22 18:07:36.239826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.596 [2024-07-22 18:07:36.239835] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:38800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.596 [2024-07-22 18:07:36.239841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.596 [2024-07-22 18:07:36.239854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:39232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.596 [2024-07-22 18:07:36.239861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.596 [2024-07-22 18:07:36.239870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.596 [2024-07-22 18:07:36.239876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.596 [2024-07-22 18:07:36.239884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.596 [2024-07-22 18:07:36.239891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.596 [2024-07-22 18:07:36.239899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.596 [2024-07-22 18:07:36.239905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.596 [2024-07-22 18:07:36.239914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:39304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.596 [2024-07-22 18:07:36.239920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.596 [2024-07-22 18:07:36.239929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:39320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.596 [2024-07-22 18:07:36.239935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.596 [2024-07-22 18:07:36.239943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.596 [2024-07-22 18:07:36.239950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.596 [2024-07-22 18:07:36.239958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:39344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.596 [2024-07-22 18:07:36.239965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.239973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:39360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.597 [2024-07-22 18:07:36.239980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.239988] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:39368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.597 [2024-07-22 18:07:36.239994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:39384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.597 [2024-07-22 18:07:36.240009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:39392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.597 [2024-07-22 18:07:36.240024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.597 [2024-07-22 18:07:36.240041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.597 [2024-07-22 18:07:36.240056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:38832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.597 [2024-07-22 18:07:36.240071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:38840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.597 [2024-07-22 18:07:36.240087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.597 [2024-07-22 18:07:36.240101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:38872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.597 [2024-07-22 18:07:36.240116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:38880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.597 [2024-07-22 18:07:36.240131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:88 nsid:1 lba:38888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.597 [2024-07-22 18:07:36.240146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:39400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.597 [2024-07-22 18:07:36.240161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:39408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.597 [2024-07-22 18:07:36.240176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.597 [2024-07-22 18:07:36.240192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.597 [2024-07-22 18:07:36.240207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.597 [2024-07-22 18:07:36.240222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:39448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.597 [2024-07-22 18:07:36.240238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:39456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.597 [2024-07-22 18:07:36.240253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.597 [2024-07-22 18:07:36.240268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.597 [2024-07-22 18:07:36.240282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:39480 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.597 [2024-07-22 18:07:36.240297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.597 [2024-07-22 18:07:36.240312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.597 [2024-07-22 18:07:36.240327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.597 [2024-07-22 18:07:36.240342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.597 [2024-07-22 18:07:36.240362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:38896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.597 [2024-07-22 18:07:36.240377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:38904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.597 [2024-07-22 18:07:36.240391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:38912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.597 [2024-07-22 18:07:36.240406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:38976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.597 [2024-07-22 18:07:36.240422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:38984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.597 [2024-07-22 18:07:36.240437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:38992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.597 
[2024-07-22 18:07:36.240451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:39000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.597 [2024-07-22 18:07:36.240466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:39008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.597 [2024-07-22 18:07:36.240481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:39520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.597 [2024-07-22 18:07:36.240496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:39528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.597 [2024-07-22 18:07:36.240511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.597 [2024-07-22 18:07:36.240525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:39544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.597 [2024-07-22 18:07:36.240540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.597 [2024-07-22 18:07:36.240548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.597 [2024-07-22 18:07:36.240554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.240563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:39560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.598 [2024-07-22 18:07:36.240569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.240578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.598 [2024-07-22 18:07:36.240584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.240593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:39576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.598 [2024-07-22 18:07:36.240599] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.240608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.598 [2024-07-22 18:07:36.240616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.240624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.598 [2024-07-22 18:07:36.240631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.240639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:39032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.598 [2024-07-22 18:07:36.240645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.240654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:39040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.598 [2024-07-22 18:07:36.240660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.240669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:39048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.598 [2024-07-22 18:07:36.240675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.240683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.598 [2024-07-22 18:07:36.240690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.240698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:39088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.598 [2024-07-22 18:07:36.240705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.240713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:39096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.598 [2024-07-22 18:07:36.240720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.240728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:39136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.598 [2024-07-22 18:07:36.240735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.240743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.598 [2024-07-22 18:07:36.240749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.240758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.598 [2024-07-22 18:07:36.240764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.240772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.598 [2024-07-22 18:07:36.240779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.240787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.598 [2024-07-22 18:07:36.240793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.240803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:39624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.598 [2024-07-22 18:07:36.240810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.240818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.598 [2024-07-22 18:07:36.240824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.240832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:39640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.598 [2024-07-22 18:07:36.240839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.240847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.598 [2024-07-22 18:07:36.240853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.240862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.598 [2024-07-22 18:07:36.240868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.240876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:39664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.598 [2024-07-22 18:07:36.240883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.240891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.598 [2024-07-22 18:07:36.240898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.240906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.598 [2024-07-22 18:07:36.240912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.240921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.598 [2024-07-22 18:07:36.240927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.240935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.598 [2024-07-22 18:07:36.240942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.240950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.598 [2024-07-22 18:07:36.240957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.240965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.598 [2024-07-22 18:07:36.240971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.240980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:39720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.598 [2024-07-22 18:07:36.240987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.240996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:39728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.598 [2024-07-22 18:07:36.241002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.241010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.598 [2024-07-22 18:07:36.241017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.241025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.598 [2024-07-22 18:07:36.241031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.241040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:39200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.598 [2024-07-22 18:07:36.241046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 
[2024-07-22 18:07:36.241055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:39208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.598 [2024-07-22 18:07:36.241061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.241069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.598 [2024-07-22 18:07:36.241076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.241084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.598 [2024-07-22 18:07:36.241091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.241099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:39240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.598 [2024-07-22 18:07:36.241106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.241114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.598 [2024-07-22 18:07:36.241121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.598 [2024-07-22 18:07:36.241129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.599 [2024-07-22 18:07:36.241136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.599 [2024-07-22 18:07:36.241151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:39752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.599 [2024-07-22 18:07:36.241166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:39760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.599 [2024-07-22 18:07:36.241182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.599 [2024-07-22 18:07:36.241197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241205] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.599 [2024-07-22 18:07:36.241211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:39784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.599 [2024-07-22 18:07:36.241226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.599 [2024-07-22 18:07:36.241241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.599 [2024-07-22 18:07:36.241256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.599 [2024-07-22 18:07:36.241270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.599 [2024-07-22 18:07:36.241286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:39824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.599 [2024-07-22 18:07:36.241300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:39832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.599 [2024-07-22 18:07:36.241315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:39840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.599 [2024-07-22 18:07:36.241329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:39848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.599 [2024-07-22 18:07:36.241344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:119 nsid:1 lba:39856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.599 [2024-07-22 18:07:36.241362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.599 [2024-07-22 18:07:36.241378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:39872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.599 [2024-07-22 18:07:36.241393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:39880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.599 [2024-07-22 18:07:36.241408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:39888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.599 [2024-07-22 18:07:36.241425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:39896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.599 [2024-07-22 18:07:36.241441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:39904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.599 [2024-07-22 18:07:36.241455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.599 [2024-07-22 18:07:36.241470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:39920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.599 [2024-07-22 18:07:36.241484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:39272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.599 [2024-07-22 18:07:36.241499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:39280 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.599 [2024-07-22 18:07:36.241514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.599 [2024-07-22 18:07:36.241529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:39312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.599 [2024-07-22 18:07:36.241544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.599 [2024-07-22 18:07:36.241560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:39352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.599 [2024-07-22 18:07:36.241574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:39376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.599 [2024-07-22 18:07:36.241589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241597] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5a2360 is same with the state(5) to be set 00:30:46.599 [2024-07-22 18:07:36.241605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:46.599 [2024-07-22 18:07:36.241610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:46.599 [2024-07-22 18:07:36.241617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39416 len:8 PRP1 0x0 PRP2 0x0 00:30:46.599 [2024-07-22 18:07:36.241624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241659] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x5a2360 was disconnected and freed. reset controller. 
00:30:46.599 [2024-07-22 18:07:36.241674] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:46.599 [2024-07-22 18:07:36.241694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.599 [2024-07-22 18:07:36.241701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.599 [2024-07-22 18:07:36.241715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.599 [2024-07-22 18:07:36.241729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.599 [2024-07-22 18:07:36.241743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.599 [2024-07-22 18:07:36.241749] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.599 [2024-07-22 18:07:36.243672] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.599 [2024-07-22 18:07:36.243692] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x591b30 (9): Bad file descriptor 00:30:46.600 [2024-07-22 18:07:36.272267] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:46.600 [2024-07-22 18:07:39.766484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.600 [2024-07-22 18:07:39.766535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:83368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.600 [2024-07-22 18:07:39.766556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.600 [2024-07-22 18:07:39.766572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.600 [2024-07-22 18:07:39.766587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.600 [2024-07-22 18:07:39.766602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.600 [2024-07-22 18:07:39.766617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:82712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.600 [2024-07-22 18:07:39.766632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:82720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.600 [2024-07-22 18:07:39.766647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:82744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.600 [2024-07-22 18:07:39.766662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.600 [2024-07-22 18:07:39.766677] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:83400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.600 [2024-07-22 18:07:39.766692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:83408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.600 [2024-07-22 18:07:39.766706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.600 [2024-07-22 18:07:39.766721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.600 [2024-07-22 18:07:39.766738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.600 [2024-07-22 18:07:39.766753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:82784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.600 [2024-07-22 18:07:39.766767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:82792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.600 [2024-07-22 18:07:39.766782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.600 [2024-07-22 18:07:39.766797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.600 [2024-07-22 18:07:39.766811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:82816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.600 [2024-07-22 18:07:39.766826] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.600 [2024-07-22 18:07:39.766841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.600 [2024-07-22 18:07:39.766856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:82840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.600 [2024-07-22 18:07:39.766871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.600 [2024-07-22 18:07:39.766886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:82864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.600 [2024-07-22 18:07:39.766901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:82888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.600 [2024-07-22 18:07:39.766916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:82912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.600 [2024-07-22 18:07:39.766932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.600 [2024-07-22 18:07:39.766947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.600 [2024-07-22 18:07:39.766962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:83448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.600 [2024-07-22 18:07:39.766977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:104 nsid:1 lba:83464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.600 [2024-07-22 18:07:39.766992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:83480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.600 [2024-07-22 18:07:39.766999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:83496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.601 [2024-07-22 18:07:39.767015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:83512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.601 [2024-07-22 18:07:39.767030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:83528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.601 [2024-07-22 18:07:39.767045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:82944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.601 [2024-07-22 18:07:39.767060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.601 [2024-07-22 18:07:39.767075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:82992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.601 [2024-07-22 18:07:39.767089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:83000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.601 [2024-07-22 18:07:39.767104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:83008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.601 [2024-07-22 18:07:39.767120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83032 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.601 [2024-07-22 18:07:39.767135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.601 [2024-07-22 18:07:39.767150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.601 [2024-07-22 18:07:39.767164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:83536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.601 [2024-07-22 18:07:39.767179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:83544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.601 [2024-07-22 18:07:39.767197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.601 [2024-07-22 18:07:39.767212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.601 [2024-07-22 18:07:39.767227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:83568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.601 [2024-07-22 18:07:39.767242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.601 [2024-07-22 18:07:39.767257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:83584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.601 [2024-07-22 18:07:39.767271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:83592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:46.601 [2024-07-22 18:07:39.767286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:83600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.601 [2024-07-22 18:07:39.767301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:83608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.601 [2024-07-22 18:07:39.767317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:83616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.601 [2024-07-22 18:07:39.767332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.601 [2024-07-22 18:07:39.767347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.601 [2024-07-22 18:07:39.767366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.601 [2024-07-22 18:07:39.767381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.601 [2024-07-22 18:07:39.767396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:83072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.601 [2024-07-22 18:07:39.767411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:83080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.601 [2024-07-22 18:07:39.767426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.601 [2024-07-22 18:07:39.767441] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:83120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.601 [2024-07-22 18:07:39.767455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:83128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.601 [2024-07-22 18:07:39.767470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.601 [2024-07-22 18:07:39.767485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:83168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.601 [2024-07-22 18:07:39.767502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:83648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.601 [2024-07-22 18:07:39.767517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.601 [2024-07-22 18:07:39.767525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:83656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.601 [2024-07-22 18:07:39.767532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.767540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.602 [2024-07-22 18:07:39.767547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.767555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.602 [2024-07-22 18:07:39.767562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.767570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:83680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.602 [2024-07-22 18:07:39.767576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.767585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.602 [2024-07-22 18:07:39.767591] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.767600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.602 [2024-07-22 18:07:39.767606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.767614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.602 [2024-07-22 18:07:39.767621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.767629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.602 [2024-07-22 18:07:39.767636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.767644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:83720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.602 [2024-07-22 18:07:39.767651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.767659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.602 [2024-07-22 18:07:39.767665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.767673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.602 [2024-07-22 18:07:39.767683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.767692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.602 [2024-07-22 18:07:39.767699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.767707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.602 [2024-07-22 18:07:39.767714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.767722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.602 [2024-07-22 18:07:39.767729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.767737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:83184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.602 [2024-07-22 18:07:39.767744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.767752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.602 [2024-07-22 18:07:39.767759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.767767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:83224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.602 [2024-07-22 18:07:39.767774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.767782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.602 [2024-07-22 18:07:39.767789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.767797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:83272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.602 [2024-07-22 18:07:39.767804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.767812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:83280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.602 [2024-07-22 18:07:39.767818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.767827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:83304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.602 [2024-07-22 18:07:39.767833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.767842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:83312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.602 [2024-07-22 18:07:39.767848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.767856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.602 [2024-07-22 18:07:39.767862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.767871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.602 [2024-07-22 18:07:39.767878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.767887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.602 [2024-07-22 18:07:39.767893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.767902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.602 [2024-07-22 18:07:39.767908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.767917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.602 [2024-07-22 18:07:39.767924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.767932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:83808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.602 [2024-07-22 18:07:39.767938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.767947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.602 [2024-07-22 18:07:39.767953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.767962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:83320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.602 [2024-07-22 18:07:39.767968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.767976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.602 [2024-07-22 18:07:39.767984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.767992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.602 [2024-07-22 18:07:39.767998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.768007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:83352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.602 [2024-07-22 18:07:39.768013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.768022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.602 [2024-07-22 18:07:39.768028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.768037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:83376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.602 [2024-07-22 18:07:39.768043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 
[2024-07-22 18:07:39.768051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.602 [2024-07-22 18:07:39.768058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.768067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:83392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.602 [2024-07-22 18:07:39.768074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.768083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.602 [2024-07-22 18:07:39.768090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.602 [2024-07-22 18:07:39.768098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.602 [2024-07-22 18:07:39.768105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:39.768113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.603 [2024-07-22 18:07:39.768120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:39.768128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.603 [2024-07-22 18:07:39.768134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:39.768143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.603 [2024-07-22 18:07:39.768149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:39.768158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.603 [2024-07-22 18:07:39.768167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:39.768176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.603 [2024-07-22 18:07:39.768182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:39.768191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.603 [2024-07-22 18:07:39.768197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:39.768205] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.603 [2024-07-22 18:07:39.768212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:39.768220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:83896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.603 [2024-07-22 18:07:39.768226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:39.768235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.603 [2024-07-22 18:07:39.768241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:39.768249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.603 [2024-07-22 18:07:39.768256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:39.768266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.603 [2024-07-22 18:07:39.768272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:39.768280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:83928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.603 [2024-07-22 18:07:39.768286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:39.768295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.603 [2024-07-22 18:07:39.768301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:39.768309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.603 [2024-07-22 18:07:39.768316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:39.768324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.603 [2024-07-22 18:07:39.768330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:39.768339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:83416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.603 [2024-07-22 18:07:39.768345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:39.768357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:70 nsid:1 lba:83432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.603 [2024-07-22 18:07:39.768363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:39.768372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:83440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.603 [2024-07-22 18:07:39.768378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:39.768386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:83456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.603 [2024-07-22 18:07:39.768392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:39.768401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.603 [2024-07-22 18:07:39.768407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:39.768415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:83488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.603 [2024-07-22 18:07:39.768421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:39.768430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.603 [2024-07-22 18:07:39.768436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:39.768443] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59dff0 is same with the state(5) to be set 00:30:46.603 [2024-07-22 18:07:39.768453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:46.603 [2024-07-22 18:07:39.768458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:46.603 [2024-07-22 18:07:39.768465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83520 len:8 PRP1 0x0 PRP2 0x0 00:30:46.603 [2024-07-22 18:07:39.768472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:39.768507] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x59dff0 was disconnected and freed. reset controller. 
00:30:46.603 [2024-07-22 18:07:39.768516] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:30:46.603 [2024-07-22 18:07:39.768535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.603 [2024-07-22 18:07:39.768542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:39.768550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.603 [2024-07-22 18:07:39.768556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:39.768563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.603 [2024-07-22 18:07:39.768570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:39.768577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.603 [2024-07-22 18:07:39.768583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:39.768589] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.603 [2024-07-22 18:07:39.768612] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x591b30 (9): Bad file descriptor 00:30:46.603 [2024-07-22 18:07:39.770678] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.603 [2024-07-22 18:07:39.846012] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:46.603 [2024-07-22 18:07:44.195379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:125064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.603 [2024-07-22 18:07:44.195413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:44.195431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:125072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.603 [2024-07-22 18:07:44.195439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:44.195448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:125080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.603 [2024-07-22 18:07:44.195455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:44.195463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:125088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.603 [2024-07-22 18:07:44.195470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:44.195479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:124520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.603 [2024-07-22 18:07:44.195490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:44.195498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:124528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.603 [2024-07-22 18:07:44.195505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:44.195513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:124544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.603 [2024-07-22 18:07:44.195520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:44.195528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:124552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.603 [2024-07-22 18:07:44.195535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.603 [2024-07-22 18:07:44.195543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:124568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.604 [2024-07-22 18:07:44.195549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.195558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:124584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.604 [2024-07-22 18:07:44.195564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 
18:07:44.195573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:124648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.604 [2024-07-22 18:07:44.195579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.195588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:124672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.604 [2024-07-22 18:07:44.195594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.195603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:125152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.604 [2024-07-22 18:07:44.195609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.195617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:125168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.604 [2024-07-22 18:07:44.195624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.195632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:125200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.604 [2024-07-22 18:07:44.195638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.195647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:125216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.604 [2024-07-22 18:07:44.195654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.195662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:125224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.604 [2024-07-22 18:07:44.195668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.195678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:125232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.604 [2024-07-22 18:07:44.195684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.195693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:125264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.604 [2024-07-22 18:07:44.195699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.195707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:125272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.604 [2024-07-22 18:07:44.195714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.195723] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:125280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.604 [2024-07-22 18:07:44.195729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.195737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:124688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.604 [2024-07-22 18:07:44.195744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.195752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:124712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.604 [2024-07-22 18:07:44.195758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.195767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:124736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.604 [2024-07-22 18:07:44.195773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.195782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:124744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.604 [2024-07-22 18:07:44.195788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.195797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:124752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.604 [2024-07-22 18:07:44.195803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.195811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:124760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.604 [2024-07-22 18:07:44.195817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.195826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:124776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.604 [2024-07-22 18:07:44.195832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.195841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:124784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.604 [2024-07-22 18:07:44.195847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.195856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:125288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.604 [2024-07-22 18:07:44.195864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.195872] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:125296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.604 [2024-07-22 18:07:44.195879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.195887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:125304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.604 [2024-07-22 18:07:44.195894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.195902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:125312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.604 [2024-07-22 18:07:44.195909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.195918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:125320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.604 [2024-07-22 18:07:44.195925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.195933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:125328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.604 [2024-07-22 18:07:44.195940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.195948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:125336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.604 [2024-07-22 18:07:44.195954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.195963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:125344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.604 [2024-07-22 18:07:44.195969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.195977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:125352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.604 [2024-07-22 18:07:44.195984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.195992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:125360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.604 [2024-07-22 18:07:44.195999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.196007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:125368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.604 [2024-07-22 18:07:44.196014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.196022] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:125376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.604 [2024-07-22 18:07:44.196028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.196037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:125384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.604 [2024-07-22 18:07:44.196043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.196056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:125392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.604 [2024-07-22 18:07:44.196062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.196071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:125400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.604 [2024-07-22 18:07:44.196077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.196085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:125408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.604 [2024-07-22 18:07:44.196092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.196100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:125416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.604 [2024-07-22 18:07:44.196106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.604 [2024-07-22 18:07:44.196114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:125424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.604 [2024-07-22 18:07:44.196121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.605 [2024-07-22 18:07:44.196129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:125432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.605 [2024-07-22 18:07:44.196135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.605 [2024-07-22 18:07:44.196144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:125440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.605 [2024-07-22 18:07:44.196150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.605 [2024-07-22 18:07:44.196158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:125448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.605 [2024-07-22 18:07:44.196165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.605 [2024-07-22 18:07:44.196173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 
lba:124824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.605 [2024-07-22 18:07:44.196179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.605 [2024-07-22 18:07:44.196188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.605 [2024-07-22 18:07:44.196194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.605 [2024-07-22 18:07:44.196202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:124864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.605 [2024-07-22 18:07:44.196208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.605 [2024-07-22 18:07:44.196217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:124872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.605 [2024-07-22 18:07:44.196223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.605 [2024-07-22 18:07:44.196232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:124880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.605 [2024-07-22 18:07:44.196238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.605 [2024-07-22 18:07:44.196247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:124888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.605 [2024-07-22 18:07:44.196254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.605 [2024-07-22 18:07:44.196262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:124904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.605 [2024-07-22 18:07:44.196268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.605 [2024-07-22 18:07:44.196277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:124920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.605 [2024-07-22 18:07:44.196283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.605 [2024-07-22 18:07:44.196292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.605 [2024-07-22 18:07:44.196298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.605 [2024-07-22 18:07:44.196306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:125464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.605 [2024-07-22 18:07:44.196313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.605 [2024-07-22 18:07:44.196322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:125472 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:46.605 [2024-07-22 18:07:44.196328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.605 [2024-07-22 18:07:44.196337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:125480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.605 [2024-07-22 18:07:44.196343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.605 [2024-07-22 18:07:44.196355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:125488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.605 [2024-07-22 18:07:44.196362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.605 [2024-07-22 18:07:44.196370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:125496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.605 [2024-07-22 18:07:44.196377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.605 [2024-07-22 18:07:44.196385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:125504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.605 [2024-07-22 18:07:44.196392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.605 [2024-07-22 18:07:44.196400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:125512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.605 [2024-07-22 18:07:44.196407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.605 [2024-07-22 18:07:44.196415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:125520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.605 [2024-07-22 18:07:44.196421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.605 [2024-07-22 18:07:44.196430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:125528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.605 [2024-07-22 18:07:44.196438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.605 [2024-07-22 18:07:44.196446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:125536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.605 [2024-07-22 18:07:44.196453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.606 [2024-07-22 18:07:44.196461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.606 [2024-07-22 18:07:44.196467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.606 [2024-07-22 18:07:44.196476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:125552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.606 
[2024-07-22 18:07:44.196482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.606 [2024-07-22 18:07:44.196490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:125560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.606 [2024-07-22 18:07:44.196497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.606 [2024-07-22 18:07:44.196505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:125568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.606 [2024-07-22 18:07:44.196512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.606 [2024-07-22 18:07:44.196520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:125576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.606 [2024-07-22 18:07:44.196526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.606 [2024-07-22 18:07:44.196535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:125584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.606 [2024-07-22 18:07:44.196541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.606 [2024-07-22 18:07:44.196550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:125592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.606 [2024-07-22 18:07:44.196556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.606 [2024-07-22 18:07:44.196564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:125600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.606 [2024-07-22 18:07:44.196571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.606 [2024-07-22 18:07:44.196579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:125608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.606 [2024-07-22 18:07:44.196586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.606 [2024-07-22 18:07:44.196594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:125616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.606 [2024-07-22 18:07:44.196600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.606 [2024-07-22 18:07:44.196608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:125624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.606 [2024-07-22 18:07:44.196615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.606 [2024-07-22 18:07:44.196624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:125632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.606 [2024-07-22 18:07:44.196631] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.606 [2024-07-22 18:07:44.196639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:125640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.606 [2024-07-22 18:07:44.196646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.606 [2024-07-22 18:07:44.196655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:125648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.606 [2024-07-22 18:07:44.196661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.606 [2024-07-22 18:07:44.196669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:125656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.606 [2024-07-22 18:07:44.196676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.606 [2024-07-22 18:07:44.196684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:125664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.606 [2024-07-22 18:07:44.196690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.606 [2024-07-22 18:07:44.196698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:125672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.606 [2024-07-22 18:07:44.196705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.606 [2024-07-22 18:07:44.196713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:125680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.606 [2024-07-22 18:07:44.196719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.606 [2024-07-22 18:07:44.196728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:125688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.606 [2024-07-22 18:07:44.196734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.606 [2024-07-22 18:07:44.196742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:125696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.606 [2024-07-22 18:07:44.196748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.606 [2024-07-22 18:07:44.196757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:125704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.606 [2024-07-22 18:07:44.196763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.606 [2024-07-22 18:07:44.196772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:125712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.606 [2024-07-22 18:07:44.196778] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.606 [2024-07-22 18:07:44.196786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:125720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.606 [2024-07-22 18:07:44.196793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.606 [2024-07-22 18:07:44.196801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:125728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.606 [2024-07-22 18:07:44.196808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.606 [2024-07-22 18:07:44.196817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.606 [2024-07-22 18:07:44.196823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.606 [2024-07-22 18:07:44.196832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.607 [2024-07-22 18:07:44.196838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.196846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:125752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.607 [2024-07-22 18:07:44.196853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.196861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:124928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.607 [2024-07-22 18:07:44.196868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.196877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:124944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.607 [2024-07-22 18:07:44.196883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.196892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:124960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.607 [2024-07-22 18:07:44.196898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.196906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:124976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.607 [2024-07-22 18:07:44.196913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.196921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:125008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.607 [2024-07-22 18:07:44.196928] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.196936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:125016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.607 [2024-07-22 18:07:44.196943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.196951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:125032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.607 [2024-07-22 18:07:44.196958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.196966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:125040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.607 [2024-07-22 18:07:44.196973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.196981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:125056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.607 [2024-07-22 18:07:44.196987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.196997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:125096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.607 [2024-07-22 18:07:44.197003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.197012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:125104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.607 [2024-07-22 18:07:44.197018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.197026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:125112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.607 [2024-07-22 18:07:44.197033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.197041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:125120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.607 [2024-07-22 18:07:44.197047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.197056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:125128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.607 [2024-07-22 18:07:44.197062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.197070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:125136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.607 [2024-07-22 18:07:44.197077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.197085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:125144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.607 [2024-07-22 18:07:44.197091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.197099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:125760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.607 [2024-07-22 18:07:44.197106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.197114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:125768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.607 [2024-07-22 18:07:44.197120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.197129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:125776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.607 [2024-07-22 18:07:44.197136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.197144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:125784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.607 [2024-07-22 18:07:44.197150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.197159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:125792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.607 [2024-07-22 18:07:44.197165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.197173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:125800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.607 [2024-07-22 18:07:44.197181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.197189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:125808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.607 [2024-07-22 18:07:44.197196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.197204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:125816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:46.607 [2024-07-22 18:07:44.197211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.197219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:125160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.607 [2024-07-22 18:07:44.197225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.197234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.607 [2024-07-22 18:07:44.197240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.197248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:125184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.607 [2024-07-22 18:07:44.197254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.197263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:125192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.607 [2024-07-22 18:07:44.197269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.197278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:125208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.607 [2024-07-22 18:07:44.197284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.197293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:125240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.607 [2024-07-22 18:07:44.197299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.197308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:125248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:46.607 [2024-07-22 18:07:44.197315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.197323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5a8100 is same with the state(5) to be set 00:30:46.607 [2024-07-22 18:07:44.197332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:46.607 [2024-07-22 18:07:44.197337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:46.607 [2024-07-22 18:07:44.197343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125256 len:8 PRP1 0x0 PRP2 0x0 00:30:46.607 [2024-07-22 18:07:44.197354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.197392] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x5a8100 was disconnected and freed. reset controller. 
00:30:46.607 [2024-07-22 18:07:44.197401] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:30:46.607 [2024-07-22 18:07:44.197421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.607 [2024-07-22 18:07:44.197431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.197439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.607 [2024-07-22 18:07:44.197446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.197453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.607 [2024-07-22 18:07:44.197460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.197467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:46.607 [2024-07-22 18:07:44.197473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:46.607 [2024-07-22 18:07:44.197480] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:46.608 [2024-07-22 18:07:44.199531] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.608 [2024-07-22 18:07:44.199554] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x591b30 (9): Bad file descriptor 00:30:46.608 [2024-07-22 18:07:44.272672] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
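The two blocks above are the visible shape of a failover in this log: in-flight I/O on the old queue pair is completed as ABORTED - SQ DELETION, the qpair is disconnected and freed, bdev_nvme moves to the next trid (here 10.0.0.2:4421 -> 4422 -> 4420), and the controller reset completes. To pull just those transitions and reset results out of the captured bdevperf output, an illustrative one-liner (not part of the test; try.txt is the capture file the test cats further down) is:

    grep -E 'bdev_nvme_failover_trid|Resetting controller successful' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt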
00:30:46.608 
00:30:46.608 Latency(us)
00:30:46.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:46.608 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:46.608 Verification LBA range: start 0x0 length 0x4000
00:30:46.608 NVMe0n1 : 15.01 15206.50 59.40 718.37 0.00 8022.42 677.42 14014.62
00:30:46.608 ===================================================================================================================
00:30:46.608 Total : 15206.50 59.40 718.37 0.00 8022.42 677.42 14014.62
00:30:46.608 Received shutdown signal, test time was about 15.000000 seconds
00:30:46.608 
00:30:46.608 Latency(us)
00:30:46.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:46.608 ===================================================================================================================
00:30:46.608 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:46.608 18:07:50 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:30:46.608 18:07:50 -- host/failover.sh@65 -- # count=3
00:30:46.608 18:07:50 -- host/failover.sh@67 -- # (( count != 3 ))
00:30:46.608 18:07:50 -- host/failover.sh@73 -- # bdevperf_pid=1851979
00:30:46.608 18:07:50 -- host/failover.sh@75 -- # waitforlisten 1851979 /var/tmp/bdevperf.sock
00:30:46.608 18:07:50 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:30:46.608 18:07:50 -- common/autotest_common.sh@819 -- # '[' -z 1851979 ']'
00:30:46.608 18:07:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:46.608 18:07:50 -- common/autotest_common.sh@824 -- # local max_retries=100
00:30:46.608 18:07:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:30:46.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
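The host/failover.sh@65-@67 lines above are the pass/fail gate for the run that just finished: the script counts 'Resetting controller successful' notices in the captured bdevperf output and expects three, one per failover seen in this log. A minimal sketch of that check, assuming the capture sits in try.txt in the working directory (the real path is the one the trace cats at @94 below):

    # gate from failover.sh@65: count successful controller resets in the capture
    count=$(grep -c 'Resetting controller successful' try.txt)
    # failover.sh@67: exactly three resets are expected for the three failover hops
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, got $count" >&2
        exit 1
    fi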
00:30:46.608 18:07:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:46.608 18:07:50 -- common/autotest_common.sh@10 -- # set +x 00:30:47.178 18:07:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:47.178 18:07:51 -- common/autotest_common.sh@852 -- # return 0 00:30:47.178 18:07:51 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:47.439 [2024-07-22 18:07:51.478380] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:47.439 18:07:51 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:47.439 [2024-07-22 18:07:51.682903] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:47.699 18:07:51 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:47.959 NVMe0n1 00:30:47.959 18:07:52 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:48.219 00:30:48.219 18:07:52 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:48.480 00:30:48.740 18:07:52 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:48.740 18:07:52 -- host/failover.sh@82 -- # grep -q NVMe0 00:30:48.740 18:07:52 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:49.000 18:07:53 -- host/failover.sh@87 -- # sleep 3 00:30:52.301 18:07:56 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:52.301 18:07:56 -- host/failover.sh@88 -- # grep -q NVMe0 00:30:52.301 18:07:56 -- host/failover.sh@90 -- # run_test_pid=1852934 00:30:52.301 18:07:56 -- host/failover.sh@92 -- # wait 1852934 00:30:52.301 18:07:56 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:53.242 0 00:30:53.242 18:07:57 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:53.242 [2024-07-22 18:07:50.459079] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:30:53.242 [2024-07-22 18:07:50.459134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1851979 ] 00:30:53.242 EAL: No free 2048 kB hugepages reported on node 1 00:30:53.242 [2024-07-22 18:07:50.541450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.242 [2024-07-22 18:07:50.601873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:53.242 [2024-07-22 18:07:53.139934] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:53.242 [2024-07-22 18:07:53.139977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.242 [2024-07-22 18:07:53.139987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.242 [2024-07-22 18:07:53.139996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.242 [2024-07-22 18:07:53.140003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.242 [2024-07-22 18:07:53.140010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.242 [2024-07-22 18:07:53.140017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.242 [2024-07-22 18:07:53.140024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:53.242 [2024-07-22 18:07:53.140031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:53.242 [2024-07-22 18:07:53.140037] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:53.242 [2024-07-22 18:07:53.140060] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:53.242 [2024-07-22 18:07:53.140073] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x73bb30 (9): Bad file descriptor 00:30:53.242 [2024-07-22 18:07:53.233425] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:53.242 Running I/O for 1 seconds... 
00:30:53.242 00:30:53.242 Latency(us) 00:30:53.242 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:53.242 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:53.242 Verification LBA range: start 0x0 length 0x4000 00:30:53.242 NVMe0n1 : 1.01 15167.02 59.25 0.00 0.00 8402.87 983.04 9931.22 00:30:53.242 =================================================================================================================== 00:30:53.242 Total : 15167.02 59.25 0.00 0.00 8402.87 983.04 9931.22 00:30:53.242 18:07:57 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:53.242 18:07:57 -- host/failover.sh@95 -- # grep -q NVMe0 00:30:53.503 18:07:57 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:53.763 18:07:57 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:53.763 18:07:57 -- host/failover.sh@99 -- # grep -q NVMe0 00:30:54.024 18:07:58 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:54.284 18:07:58 -- host/failover.sh@101 -- # sleep 3 00:30:57.663 18:08:01 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:57.663 18:08:01 -- host/failover.sh@103 -- # grep -q NVMe0 00:30:57.663 18:08:01 -- host/failover.sh@108 -- # killprocess 1851979 00:30:57.663 18:08:01 -- common/autotest_common.sh@926 -- # '[' -z 1851979 ']' 00:30:57.663 18:08:01 -- common/autotest_common.sh@930 -- # kill -0 1851979 00:30:57.663 18:08:01 -- common/autotest_common.sh@931 -- # uname 00:30:57.663 18:08:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:57.663 18:08:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1851979 00:30:57.663 18:08:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:57.663 18:08:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:57.663 18:08:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1851979' 00:30:57.663 killing process with pid 1851979 00:30:57.663 18:08:01 -- common/autotest_common.sh@945 -- # kill 1851979 00:30:57.663 18:08:01 -- common/autotest_common.sh@950 -- # wait 1851979 00:30:57.663 18:08:01 -- host/failover.sh@110 -- # sync 00:30:57.663 18:08:01 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:57.663 18:08:01 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:57.663 18:08:01 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:57.663 18:08:01 -- host/failover.sh@116 -- # nvmftestfini 00:30:57.663 18:08:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:57.663 18:08:01 -- nvmf/common.sh@116 -- # sync 00:30:57.663 18:08:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:57.663 18:08:01 -- nvmf/common.sh@119 -- # set +e 00:30:57.663 18:08:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:57.663 18:08:01 -- nvmf/common.sh@121 -- # 
modprobe -v -r nvme-tcp 00:30:57.663 rmmod nvme_tcp 00:30:57.923 rmmod nvme_fabrics 00:30:57.923 rmmod nvme_keyring 00:30:57.923 18:08:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:57.924 18:08:02 -- nvmf/common.sh@123 -- # set -e 00:30:57.924 18:08:02 -- nvmf/common.sh@124 -- # return 0 00:30:57.924 18:08:02 -- nvmf/common.sh@477 -- # '[' -n 1848851 ']' 00:30:57.924 18:08:02 -- nvmf/common.sh@478 -- # killprocess 1848851 00:30:57.924 18:08:02 -- common/autotest_common.sh@926 -- # '[' -z 1848851 ']' 00:30:57.924 18:08:02 -- common/autotest_common.sh@930 -- # kill -0 1848851 00:30:57.924 18:08:02 -- common/autotest_common.sh@931 -- # uname 00:30:57.924 18:08:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:57.924 18:08:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1848851 00:30:57.924 18:08:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:57.924 18:08:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:57.924 18:08:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1848851' 00:30:57.924 killing process with pid 1848851 00:30:57.924 18:08:02 -- common/autotest_common.sh@945 -- # kill 1848851 00:30:57.924 18:08:02 -- common/autotest_common.sh@950 -- # wait 1848851 00:30:57.924 18:08:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:57.924 18:08:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:57.924 18:08:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:57.924 18:08:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:57.924 18:08:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:57.924 18:08:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.924 18:08:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:57.924 18:08:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:00.474 18:08:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:00.474 00:31:00.474 real 0m40.646s 00:31:00.474 user 2m5.796s 00:31:00.474 sys 0m8.784s 00:31:00.474 18:08:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:00.474 18:08:04 -- common/autotest_common.sh@10 -- # set +x 00:31:00.474 ************************************ 00:31:00.474 END TEST nvmf_failover 00:31:00.474 ************************************ 00:31:00.474 18:08:04 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:00.474 18:08:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:00.474 18:08:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:00.474 18:08:04 -- common/autotest_common.sh@10 -- # set +x 00:31:00.474 ************************************ 00:31:00.474 START TEST nvmf_discovery 00:31:00.474 ************************************ 00:31:00.474 18:08:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:00.474 * Looking for test storage... 
00:31:00.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:00.474 18:08:04 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:00.474 18:08:04 -- nvmf/common.sh@7 -- # uname -s 00:31:00.474 18:08:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:00.474 18:08:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:00.474 18:08:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:00.474 18:08:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:00.474 18:08:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:00.474 18:08:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:00.474 18:08:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:00.474 18:08:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:00.474 18:08:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:00.474 18:08:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:00.474 18:08:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:31:00.474 18:08:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:31:00.474 18:08:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:00.474 18:08:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:00.474 18:08:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:00.474 18:08:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:00.474 18:08:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:00.474 18:08:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:00.474 18:08:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:00.474 18:08:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.474 18:08:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.474 18:08:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.474 18:08:04 -- paths/export.sh@5 -- # export PATH 00:31:00.474 18:08:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.474 18:08:04 -- nvmf/common.sh@46 -- # : 0 00:31:00.474 18:08:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:00.474 18:08:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:00.474 18:08:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:00.474 18:08:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:00.474 18:08:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:00.474 18:08:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:00.474 18:08:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:00.474 18:08:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:00.474 18:08:04 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:00.474 18:08:04 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:31:00.474 18:08:04 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:00.474 18:08:04 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:00.474 18:08:04 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:00.474 18:08:04 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:00.474 18:08:04 -- host/discovery.sh@25 -- # nvmftestinit 00:31:00.474 18:08:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:00.474 18:08:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:00.474 18:08:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:00.474 18:08:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:00.474 18:08:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:00.474 18:08:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:00.474 18:08:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:00.474 18:08:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:00.474 18:08:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:00.474 18:08:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:00.474 18:08:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:00.474 18:08:04 -- common/autotest_common.sh@10 -- # set +x 00:31:08.618 18:08:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:08.618 18:08:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:08.618 18:08:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:08.618 18:08:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:08.618 18:08:12 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:08.618 18:08:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:08.618 18:08:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:08.618 18:08:12 -- nvmf/common.sh@294 -- # net_devs=() 00:31:08.618 18:08:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:08.618 18:08:12 -- nvmf/common.sh@295 -- # e810=() 00:31:08.618 18:08:12 -- nvmf/common.sh@295 -- # local -ga e810 00:31:08.618 18:08:12 -- nvmf/common.sh@296 -- # x722=() 00:31:08.618 18:08:12 -- nvmf/common.sh@296 -- # local -ga x722 00:31:08.618 18:08:12 -- nvmf/common.sh@297 -- # mlx=() 00:31:08.618 18:08:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:08.618 18:08:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:08.618 18:08:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:08.618 18:08:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:08.618 18:08:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:08.618 18:08:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:08.618 18:08:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:08.618 18:08:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:08.618 18:08:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:08.618 18:08:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:08.618 18:08:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:08.618 18:08:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:08.618 18:08:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:08.618 18:08:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:08.618 18:08:12 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:08.618 18:08:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:08.618 18:08:12 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:08.618 18:08:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:08.618 18:08:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:08.618 18:08:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:08.618 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:08.618 18:08:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:08.618 18:08:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:08.618 18:08:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:08.618 18:08:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:08.618 18:08:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:08.618 18:08:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:08.618 18:08:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:08.618 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:08.618 18:08:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:08.618 18:08:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:08.618 18:08:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:08.618 18:08:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:08.618 18:08:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:08.618 18:08:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:08.618 18:08:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:08.618 18:08:12 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:08.618 18:08:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:08.618 
18:08:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.618 18:08:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:08.618 18:08:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.618 18:08:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:08.618 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:08.618 18:08:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.618 18:08:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:08.618 18:08:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.618 18:08:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:08.618 18:08:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.618 18:08:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:08.618 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:08.618 18:08:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.618 18:08:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:08.618 18:08:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:08.618 18:08:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:08.618 18:08:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:08.618 18:08:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:08.618 18:08:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:08.618 18:08:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:08.618 18:08:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:08.618 18:08:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:08.618 18:08:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:08.618 18:08:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:08.618 18:08:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:08.618 18:08:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:08.618 18:08:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:08.618 18:08:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:08.618 18:08:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:08.618 18:08:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:08.618 18:08:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:08.618 18:08:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:08.618 18:08:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:08.618 18:08:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:08.618 18:08:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:08.618 18:08:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:08.618 18:08:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:08.618 18:08:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:08.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:08.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.533 ms 00:31:08.618 00:31:08.618 --- 10.0.0.2 ping statistics --- 00:31:08.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.618 rtt min/avg/max/mdev = 0.533/0.533/0.533/0.000 ms 00:31:08.618 18:08:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:08.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:08.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:31:08.618 00:31:08.618 --- 10.0.0.1 ping statistics --- 00:31:08.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.619 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:31:08.619 18:08:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:08.619 18:08:12 -- nvmf/common.sh@410 -- # return 0 00:31:08.619 18:08:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:08.619 18:08:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:08.619 18:08:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:08.619 18:08:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:08.619 18:08:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:08.619 18:08:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:08.619 18:08:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:08.619 18:08:12 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:08.619 18:08:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:08.619 18:08:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:08.619 18:08:12 -- common/autotest_common.sh@10 -- # set +x 00:31:08.619 18:08:12 -- nvmf/common.sh@469 -- # nvmfpid=1858365 00:31:08.619 18:08:12 -- nvmf/common.sh@470 -- # waitforlisten 1858365 00:31:08.619 18:08:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:08.619 18:08:12 -- common/autotest_common.sh@819 -- # '[' -z 1858365 ']' 00:31:08.619 18:08:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:08.619 18:08:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:08.619 18:08:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:08.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:08.619 18:08:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:08.619 18:08:12 -- common/autotest_common.sh@10 -- # set +x 00:31:08.878 [2024-07-22 18:08:12.937386] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:31:08.878 [2024-07-22 18:08:12.937458] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:08.878 EAL: No free 2048 kB hugepages reported on node 1 00:31:08.878 [2024-07-22 18:08:13.013066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.878 [2024-07-22 18:08:13.080910] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:08.878 [2024-07-22 18:08:13.081027] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:08.878 [2024-07-22 18:08:13.081034] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:08.878 [2024-07-22 18:08:13.081041] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:08.878 [2024-07-22 18:08:13.081067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:09.820 18:08:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:09.820 18:08:13 -- common/autotest_common.sh@852 -- # return 0 00:31:09.820 18:08:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:09.820 18:08:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:09.820 18:08:13 -- common/autotest_common.sh@10 -- # set +x 00:31:09.820 18:08:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:09.820 18:08:13 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:09.820 18:08:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:09.820 18:08:13 -- common/autotest_common.sh@10 -- # set +x 00:31:09.820 [2024-07-22 18:08:13.810477] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:09.820 18:08:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:09.820 18:08:13 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:09.820 18:08:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:09.820 18:08:13 -- common/autotest_common.sh@10 -- # set +x 00:31:09.820 [2024-07-22 18:08:13.822621] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:09.820 18:08:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:09.820 18:08:13 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:09.820 18:08:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:09.820 18:08:13 -- common/autotest_common.sh@10 -- # set +x 00:31:09.820 null0 00:31:09.820 18:08:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:09.820 18:08:13 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:09.820 18:08:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:09.820 18:08:13 -- common/autotest_common.sh@10 -- # set +x 00:31:09.820 null1 00:31:09.820 18:08:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:09.820 18:08:13 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:09.820 18:08:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:09.820 18:08:13 -- common/autotest_common.sh@10 -- # set +x 00:31:09.820 18:08:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:09.820 18:08:13 -- host/discovery.sh@45 -- # hostpid=1858492 00:31:09.820 18:08:13 -- host/discovery.sh@46 -- # waitforlisten 1858492 /tmp/host.sock 00:31:09.820 18:08:13 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:09.820 18:08:13 -- common/autotest_common.sh@819 -- # '[' -z 1858492 ']' 00:31:09.820 18:08:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:31:09.820 18:08:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:09.820 18:08:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:09.820 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:09.820 18:08:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:09.820 18:08:13 -- common/autotest_common.sh@10 -- # set +x 00:31:09.820 [2024-07-22 18:08:13.906343] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:31:09.820 [2024-07-22 18:08:13.906418] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1858492 ] 00:31:09.820 EAL: No free 2048 kB hugepages reported on node 1 00:31:09.820 [2024-07-22 18:08:13.993653] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.820 [2024-07-22 18:08:14.053454] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:09.820 [2024-07-22 18:08:14.053574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:10.761 18:08:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:10.761 18:08:14 -- common/autotest_common.sh@852 -- # return 0 00:31:10.761 18:08:14 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:10.761 18:08:14 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:10.761 18:08:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.761 18:08:14 -- common/autotest_common.sh@10 -- # set +x 00:31:10.761 18:08:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.761 18:08:14 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:10.761 18:08:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.761 18:08:14 -- common/autotest_common.sh@10 -- # set +x 00:31:10.761 18:08:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.761 18:08:14 -- host/discovery.sh@72 -- # notify_id=0 00:31:10.761 18:08:14 -- host/discovery.sh@78 -- # get_subsystem_names 00:31:10.761 18:08:14 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:10.761 18:08:14 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:10.761 18:08:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.761 18:08:14 -- host/discovery.sh@59 -- # sort 00:31:10.761 18:08:14 -- common/autotest_common.sh@10 -- # set +x 00:31:10.761 18:08:14 -- host/discovery.sh@59 -- # xargs 00:31:10.761 18:08:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.761 18:08:14 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:31:10.761 18:08:14 -- host/discovery.sh@79 -- # get_bdev_list 00:31:10.761 18:08:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:10.761 18:08:14 -- host/discovery.sh@55 -- # xargs 00:31:10.761 18:08:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:10.761 18:08:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.761 18:08:14 -- host/discovery.sh@55 -- # sort 00:31:10.761 18:08:14 -- common/autotest_common.sh@10 -- # set +x 00:31:10.761 18:08:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.761 18:08:14 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:31:10.761 18:08:14 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:10.761 18:08:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.761 18:08:14 -- common/autotest_common.sh@10 -- # set +x 00:31:10.761 18:08:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.761 18:08:14 -- host/discovery.sh@82 -- # get_subsystem_names 00:31:10.761 18:08:14 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:10.761 18:08:14 -- host/discovery.sh@59 -- # xargs 
00:31:10.761 18:08:14 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:10.761 18:08:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.761 18:08:14 -- host/discovery.sh@59 -- # sort 00:31:10.761 18:08:14 -- common/autotest_common.sh@10 -- # set +x 00:31:10.761 18:08:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.761 18:08:14 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:31:10.761 18:08:14 -- host/discovery.sh@83 -- # get_bdev_list 00:31:10.761 18:08:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:10.761 18:08:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.761 18:08:14 -- common/autotest_common.sh@10 -- # set +x 00:31:10.761 18:08:14 -- host/discovery.sh@55 -- # xargs 00:31:10.761 18:08:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:10.761 18:08:14 -- host/discovery.sh@55 -- # sort 00:31:10.761 18:08:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.761 18:08:14 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:10.761 18:08:14 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:10.761 18:08:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.761 18:08:14 -- common/autotest_common.sh@10 -- # set +x 00:31:10.761 18:08:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.761 18:08:14 -- host/discovery.sh@86 -- # get_subsystem_names 00:31:10.761 18:08:14 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:10.761 18:08:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.761 18:08:14 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:10.762 18:08:14 -- common/autotest_common.sh@10 -- # set +x 00:31:10.762 18:08:14 -- host/discovery.sh@59 -- # sort 00:31:10.762 18:08:14 -- host/discovery.sh@59 -- # xargs 00:31:10.762 18:08:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:10.762 18:08:15 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:31:10.762 18:08:15 -- host/discovery.sh@87 -- # get_bdev_list 00:31:10.762 18:08:15 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:10.762 18:08:15 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:10.762 18:08:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:10.762 18:08:15 -- common/autotest_common.sh@10 -- # set +x 00:31:10.762 18:08:15 -- host/discovery.sh@55 -- # sort 00:31:10.762 18:08:15 -- host/discovery.sh@55 -- # xargs 00:31:10.762 18:08:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.022 18:08:15 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:11.022 18:08:15 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:11.022 18:08:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.022 18:08:15 -- common/autotest_common.sh@10 -- # set +x 00:31:11.022 [2024-07-22 18:08:15.081900] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:11.022 18:08:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.022 18:08:15 -- host/discovery.sh@92 -- # get_subsystem_names 00:31:11.022 18:08:15 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:11.022 18:08:15 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:11.022 18:08:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.022 18:08:15 -- common/autotest_common.sh@10 -- # set +x 00:31:11.022 18:08:15 -- host/discovery.sh@59 -- # sort 00:31:11.022 18:08:15 
-- host/discovery.sh@59 -- # xargs 00:31:11.022 18:08:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.022 18:08:15 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:11.022 18:08:15 -- host/discovery.sh@93 -- # get_bdev_list 00:31:11.022 18:08:15 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:11.022 18:08:15 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:11.022 18:08:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.022 18:08:15 -- common/autotest_common.sh@10 -- # set +x 00:31:11.022 18:08:15 -- host/discovery.sh@55 -- # sort 00:31:11.022 18:08:15 -- host/discovery.sh@55 -- # xargs 00:31:11.022 18:08:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.022 18:08:15 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:31:11.022 18:08:15 -- host/discovery.sh@94 -- # get_notification_count 00:31:11.022 18:08:15 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:11.022 18:08:15 -- host/discovery.sh@74 -- # jq '. | length' 00:31:11.022 18:08:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.022 18:08:15 -- common/autotest_common.sh@10 -- # set +x 00:31:11.022 18:08:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.022 18:08:15 -- host/discovery.sh@74 -- # notification_count=0 00:31:11.022 18:08:15 -- host/discovery.sh@75 -- # notify_id=0 00:31:11.022 18:08:15 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:31:11.022 18:08:15 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:11.022 18:08:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:11.022 18:08:15 -- common/autotest_common.sh@10 -- # set +x 00:31:11.022 18:08:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:11.022 18:08:15 -- host/discovery.sh@100 -- # sleep 1 00:31:11.593 [2024-07-22 18:08:15.748214] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:11.593 [2024-07-22 18:08:15.748235] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:11.593 [2024-07-22 18:08:15.748247] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:11.593 [2024-07-22 18:08:15.837526] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:11.854 [2024-07-22 18:08:15.898724] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:11.854 [2024-07-22 18:08:15.898746] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:12.115 18:08:16 -- host/discovery.sh@101 -- # get_subsystem_names 00:31:12.115 18:08:16 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:12.115 18:08:16 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:12.115 18:08:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.115 18:08:16 -- host/discovery.sh@59 -- # sort 00:31:12.115 18:08:16 -- common/autotest_common.sh@10 -- # set +x 00:31:12.115 18:08:16 -- host/discovery.sh@59 -- # xargs 00:31:12.115 18:08:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.115 18:08:16 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:12.115 18:08:16 -- host/discovery.sh@102 -- # get_bdev_list 00:31:12.115 18:08:16 -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:12.115 18:08:16 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:12.115 18:08:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.115 18:08:16 -- host/discovery.sh@55 -- # sort 00:31:12.115 18:08:16 -- common/autotest_common.sh@10 -- # set +x 00:31:12.115 18:08:16 -- host/discovery.sh@55 -- # xargs 00:31:12.115 18:08:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.115 18:08:16 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:12.115 18:08:16 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:31:12.115 18:08:16 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:12.115 18:08:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.115 18:08:16 -- common/autotest_common.sh@10 -- # set +x 00:31:12.115 18:08:16 -- host/discovery.sh@63 -- # xargs 00:31:12.115 18:08:16 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:12.115 18:08:16 -- host/discovery.sh@63 -- # sort -n 00:31:12.115 18:08:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.375 18:08:16 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:31:12.375 18:08:16 -- host/discovery.sh@104 -- # get_notification_count 00:31:12.376 18:08:16 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:12.376 18:08:16 -- host/discovery.sh@74 -- # jq '. | length' 00:31:12.376 18:08:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.376 18:08:16 -- common/autotest_common.sh@10 -- # set +x 00:31:12.376 18:08:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.376 18:08:16 -- host/discovery.sh@74 -- # notification_count=1 00:31:12.376 18:08:16 -- host/discovery.sh@75 -- # notify_id=1 00:31:12.376 18:08:16 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:31:12.376 18:08:16 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:12.376 18:08:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.376 18:08:16 -- common/autotest_common.sh@10 -- # set +x 00:31:12.376 18:08:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.376 18:08:16 -- host/discovery.sh@109 -- # sleep 1 00:31:13.317 18:08:17 -- host/discovery.sh@110 -- # get_bdev_list 00:31:13.317 18:08:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:13.317 18:08:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:13.317 18:08:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:13.317 18:08:17 -- host/discovery.sh@55 -- # sort 00:31:13.317 18:08:17 -- common/autotest_common.sh@10 -- # set +x 00:31:13.317 18:08:17 -- host/discovery.sh@55 -- # xargs 00:31:13.317 18:08:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:13.317 18:08:17 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:13.317 18:08:17 -- host/discovery.sh@111 -- # get_notification_count 00:31:13.317 18:08:17 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:13.317 18:08:17 -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:13.317 18:08:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:13.317 18:08:17 -- common/autotest_common.sh@10 -- # set +x 00:31:13.317 18:08:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:13.317 18:08:17 -- host/discovery.sh@74 -- # notification_count=1 00:31:13.317 18:08:17 -- host/discovery.sh@75 -- # notify_id=2 00:31:13.317 18:08:17 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:31:13.317 18:08:17 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:13.317 18:08:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:13.317 18:08:17 -- common/autotest_common.sh@10 -- # set +x 00:31:13.317 [2024-07-22 18:08:17.580690] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:13.317 [2024-07-22 18:08:17.581645] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:13.317 [2024-07-22 18:08:17.581670] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:13.317 18:08:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:13.317 18:08:17 -- host/discovery.sh@117 -- # sleep 1 00:31:13.578 [2024-07-22 18:08:17.710063] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:13.840 [2024-07-22 18:08:17.980513] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:13.840 [2024-07-22 18:08:17.980535] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:13.840 [2024-07-22 18:08:17.980540] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:14.410 18:08:18 -- host/discovery.sh@118 -- # get_subsystem_names 00:31:14.410 18:08:18 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:14.410 18:08:18 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:14.410 18:08:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.410 18:08:18 -- host/discovery.sh@59 -- # sort 00:31:14.410 18:08:18 -- common/autotest_common.sh@10 -- # set +x 00:31:14.410 18:08:18 -- host/discovery.sh@59 -- # xargs 00:31:14.410 18:08:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.410 18:08:18 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:14.410 18:08:18 -- host/discovery.sh@119 -- # get_bdev_list 00:31:14.410 18:08:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:14.410 18:08:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:14.410 18:08:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.410 18:08:18 -- common/autotest_common.sh@10 -- # set +x 00:31:14.410 18:08:18 -- host/discovery.sh@55 -- # sort 00:31:14.410 18:08:18 -- host/discovery.sh@55 -- # xargs 00:31:14.410 18:08:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.672 18:08:18 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:14.672 18:08:18 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:31:14.672 18:08:18 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:14.672 18:08:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.672 18:08:18 -- common/autotest_common.sh@10 -- 
# set +x 00:31:14.672 18:08:18 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:14.672 18:08:18 -- host/discovery.sh@63 -- # sort -n 00:31:14.672 18:08:18 -- host/discovery.sh@63 -- # xargs 00:31:14.672 18:08:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.672 18:08:18 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:14.672 18:08:18 -- host/discovery.sh@121 -- # get_notification_count 00:31:14.672 18:08:18 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:14.672 18:08:18 -- host/discovery.sh@74 -- # jq '. | length' 00:31:14.672 18:08:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.672 18:08:18 -- common/autotest_common.sh@10 -- # set +x 00:31:14.672 18:08:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.672 18:08:18 -- host/discovery.sh@74 -- # notification_count=0 00:31:14.672 18:08:18 -- host/discovery.sh@75 -- # notify_id=2 00:31:14.673 18:08:18 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:31:14.673 18:08:18 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:14.673 18:08:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.673 18:08:18 -- common/autotest_common.sh@10 -- # set +x 00:31:14.673 [2024-07-22 18:08:18.800905] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:14.673 [2024-07-22 18:08:18.800926] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:14.673 [2024-07-22 18:08:18.803709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.673 [2024-07-22 18:08:18.803726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.673 [2024-07-22 18:08:18.803735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.673 [2024-07-22 18:08:18.803741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.673 [2024-07-22 18:08:18.803749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.673 [2024-07-22 18:08:18.803755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.673 [2024-07-22 18:08:18.803763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.673 [2024-07-22 18:08:18.803769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.673 [2024-07-22 18:08:18.803775] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x627940 is same with the state(5) to be set 00:31:14.673 18:08:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.673 18:08:18 -- host/discovery.sh@127 -- # sleep 1 00:31:14.673 [2024-07-22 18:08:18.813723] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x627940 (9): Bad file descriptor 00:31:14.673 [2024-07-22 18:08:18.823764] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:14.673 [2024-07-22 18:08:18.824091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.673 [2024-07-22 18:08:18.824553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.673 [2024-07-22 18:08:18.824587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x627940 with addr=10.0.0.2, port=4420 00:31:14.673 [2024-07-22 18:08:18.824597] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x627940 is same with the state(5) to be set 00:31:14.673 [2024-07-22 18:08:18.824614] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x627940 (9): Bad file descriptor 00:31:14.673 [2024-07-22 18:08:18.824648] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:14.673 [2024-07-22 18:08:18.824657] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:14.673 [2024-07-22 18:08:18.824665] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:14.673 [2024-07-22 18:08:18.824679] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.673 [2024-07-22 18:08:18.833817] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:14.673 [2024-07-22 18:08:18.834098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.673 [2024-07-22 18:08:18.834529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.673 [2024-07-22 18:08:18.834564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x627940 with addr=10.0.0.2, port=4420 00:31:14.673 [2024-07-22 18:08:18.834574] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x627940 is same with the state(5) to be set 00:31:14.673 [2024-07-22 18:08:18.834591] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x627940 (9): Bad file descriptor 00:31:14.673 [2024-07-22 18:08:18.834602] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:14.673 [2024-07-22 18:08:18.834608] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:14.673 [2024-07-22 18:08:18.834619] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:14.673 [2024-07-22 18:08:18.834633] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.673 [2024-07-22 18:08:18.843870] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:14.673 [2024-07-22 18:08:18.844209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.673 [2024-07-22 18:08:18.844628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.673 [2024-07-22 18:08:18.844663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x627940 with addr=10.0.0.2, port=4420 00:31:14.673 [2024-07-22 18:08:18.844673] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x627940 is same with the state(5) to be set 00:31:14.673 [2024-07-22 18:08:18.844691] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x627940 (9): Bad file descriptor 00:31:14.673 [2024-07-22 18:08:18.844735] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:14.673 [2024-07-22 18:08:18.844744] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:14.673 [2024-07-22 18:08:18.844752] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:14.673 [2024-07-22 18:08:18.844766] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.673 [2024-07-22 18:08:18.853925] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:14.673 [2024-07-22 18:08:18.854211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.673 [2024-07-22 18:08:18.854359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.673 [2024-07-22 18:08:18.854371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x627940 with addr=10.0.0.2, port=4420 00:31:14.673 [2024-07-22 18:08:18.854378] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x627940 is same with the state(5) to be set 00:31:14.673 [2024-07-22 18:08:18.854389] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x627940 (9): Bad file descriptor 00:31:14.673 [2024-07-22 18:08:18.854399] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:14.673 [2024-07-22 18:08:18.854405] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:14.673 [2024-07-22 18:08:18.854411] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:14.673 [2024-07-22 18:08:18.854421] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.673 [2024-07-22 18:08:18.863979] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:14.673 [2024-07-22 18:08:18.864273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.673 [2024-07-22 18:08:18.864644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.673 [2024-07-22 18:08:18.864679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x627940 with addr=10.0.0.2, port=4420 00:31:14.673 [2024-07-22 18:08:18.864690] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x627940 is same with the state(5) to be set 00:31:14.673 [2024-07-22 18:08:18.864709] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x627940 (9): Bad file descriptor 00:31:14.673 [2024-07-22 18:08:18.864720] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:14.673 [2024-07-22 18:08:18.864726] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:14.673 [2024-07-22 18:08:18.864733] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:14.673 [2024-07-22 18:08:18.864747] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.673 [2024-07-22 18:08:18.874029] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:14.673 [2024-07-22 18:08:18.874508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.673 [2024-07-22 18:08:18.874826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.673 [2024-07-22 18:08:18.874838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x627940 with addr=10.0.0.2, port=4420 00:31:14.673 [2024-07-22 18:08:18.874846] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x627940 is same with the state(5) to be set 00:31:14.673 [2024-07-22 18:08:18.874863] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x627940 (9): Bad file descriptor 00:31:14.673 [2024-07-22 18:08:18.874886] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:14.673 [2024-07-22 18:08:18.874894] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:14.673 [2024-07-22 18:08:18.874901] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:14.673 [2024-07-22 18:08:18.874914] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:14.673 [2024-07-22 18:08:18.884085] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:14.674 [2024-07-22 18:08:18.884445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.674 [2024-07-22 18:08:18.884661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.674 [2024-07-22 18:08:18.884669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x627940 with addr=10.0.0.2, port=4420 00:31:14.674 [2024-07-22 18:08:18.884676] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x627940 is same with the state(5) to be set 00:31:14.674 [2024-07-22 18:08:18.884686] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x627940 (9): Bad file descriptor 00:31:14.674 [2024-07-22 18:08:18.884695] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:14.674 [2024-07-22 18:08:18.884701] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:14.674 [2024-07-22 18:08:18.884707] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:14.674 [2024-07-22 18:08:18.884717] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.674 [2024-07-22 18:08:18.888925] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:14.674 [2024-07-22 18:08:18.888944] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:15.620 18:08:19 -- host/discovery.sh@128 -- # get_subsystem_names 00:31:15.620 18:08:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:15.620 18:08:19 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:15.620 18:08:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:15.620 18:08:19 -- host/discovery.sh@59 -- # sort 00:31:15.620 18:08:19 -- common/autotest_common.sh@10 -- # set +x 00:31:15.620 18:08:19 -- host/discovery.sh@59 -- # xargs 00:31:15.620 18:08:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:15.620 18:08:19 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:15.621 18:08:19 -- host/discovery.sh@129 -- # get_bdev_list 00:31:15.621 18:08:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:15.621 18:08:19 -- host/discovery.sh@55 -- # xargs 00:31:15.621 18:08:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:15.621 18:08:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:15.621 18:08:19 -- host/discovery.sh@55 -- # sort 00:31:15.621 18:08:19 -- common/autotest_common.sh@10 -- # set +x 00:31:15.621 18:08:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:15.881 18:08:19 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:15.881 18:08:19 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:31:15.881 18:08:19 -- host/discovery.sh@63 -- # xargs 00:31:15.881 18:08:19 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:15.881 18:08:19 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:15.881 18:08:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:15.881 18:08:19 -- 
host/discovery.sh@63 -- # sort -n 00:31:15.881 18:08:19 -- common/autotest_common.sh@10 -- # set +x 00:31:15.881 18:08:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:15.881 18:08:19 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:31:15.881 18:08:19 -- host/discovery.sh@131 -- # get_notification_count 00:31:15.881 18:08:19 -- host/discovery.sh@74 -- # jq '. | length' 00:31:15.881 18:08:19 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:15.881 18:08:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:15.881 18:08:19 -- common/autotest_common.sh@10 -- # set +x 00:31:15.881 18:08:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:15.881 18:08:19 -- host/discovery.sh@74 -- # notification_count=0 00:31:15.881 18:08:19 -- host/discovery.sh@75 -- # notify_id=2 00:31:15.881 18:08:19 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:31:15.881 18:08:19 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:15.881 18:08:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:15.881 18:08:19 -- common/autotest_common.sh@10 -- # set +x 00:31:15.881 18:08:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:15.881 18:08:20 -- host/discovery.sh@135 -- # sleep 1 00:31:16.821 18:08:21 -- host/discovery.sh@136 -- # get_subsystem_names 00:31:16.821 18:08:21 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:16.821 18:08:21 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:16.821 18:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.821 18:08:21 -- common/autotest_common.sh@10 -- # set +x 00:31:16.821 18:08:21 -- host/discovery.sh@59 -- # sort 00:31:16.821 18:08:21 -- host/discovery.sh@59 -- # xargs 00:31:16.821 18:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.821 18:08:21 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:31:16.821 18:08:21 -- host/discovery.sh@137 -- # get_bdev_list 00:31:16.821 18:08:21 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:16.821 18:08:21 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:16.821 18:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.821 18:08:21 -- host/discovery.sh@55 -- # sort 00:31:16.821 18:08:21 -- common/autotest_common.sh@10 -- # set +x 00:31:16.821 18:08:21 -- host/discovery.sh@55 -- # xargs 00:31:16.821 18:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:17.080 18:08:21 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:31:17.080 18:08:21 -- host/discovery.sh@138 -- # get_notification_count 00:31:17.080 18:08:21 -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:17.080 18:08:21 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:17.080 18:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:17.080 18:08:21 -- common/autotest_common.sh@10 -- # set +x 00:31:17.080 18:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:17.080 18:08:21 -- host/discovery.sh@74 -- # notification_count=2 00:31:17.080 18:08:21 -- host/discovery.sh@75 -- # notify_id=4 00:31:17.080 18:08:21 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:31:17.080 18:08:21 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:17.080 18:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:17.080 18:08:21 -- common/autotest_common.sh@10 -- # set +x 00:31:18.020 [2024-07-22 18:08:22.175341] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:18.020 [2024-07-22 18:08:22.175362] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:18.020 [2024-07-22 18:08:22.175374] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:18.020 [2024-07-22 18:08:22.263643] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:18.280 [2024-07-22 18:08:22.369517] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:18.280 [2024-07-22 18:08:22.369551] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:18.280 18:08:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.280 18:08:22 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:18.280 18:08:22 -- common/autotest_common.sh@640 -- # local es=0 00:31:18.280 18:08:22 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:18.280 18:08:22 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:31:18.280 18:08:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:18.280 18:08:22 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:31:18.280 18:08:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:18.280 18:08:22 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:18.280 18:08:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.280 18:08:22 -- common/autotest_common.sh@10 -- # set +x 00:31:18.280 request: 00:31:18.280 { 00:31:18.280 "name": "nvme", 00:31:18.280 "trtype": "tcp", 00:31:18.280 "traddr": "10.0.0.2", 00:31:18.280 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:18.280 "adrfam": "ipv4", 00:31:18.280 "trsvcid": "8009", 00:31:18.280 "wait_for_attach": true, 00:31:18.280 "method": "bdev_nvme_start_discovery", 00:31:18.280 "req_id": 1 00:31:18.280 } 00:31:18.280 Got JSON-RPC error response 00:31:18.280 response: 00:31:18.280 { 00:31:18.280 "code": -17, 00:31:18.280 "message": "File exists" 00:31:18.280 } 00:31:18.280 18:08:22 -- common/autotest_common.sh@579 -- # 
[[ 1 == 0 ]] 00:31:18.281 18:08:22 -- common/autotest_common.sh@643 -- # es=1 00:31:18.281 18:08:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:31:18.281 18:08:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:31:18.281 18:08:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:31:18.281 18:08:22 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:31:18.281 18:08:22 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:18.281 18:08:22 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:18.281 18:08:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.281 18:08:22 -- common/autotest_common.sh@10 -- # set +x 00:31:18.281 18:08:22 -- host/discovery.sh@67 -- # sort 00:31:18.281 18:08:22 -- host/discovery.sh@67 -- # xargs 00:31:18.281 18:08:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.281 18:08:22 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:31:18.281 18:08:22 -- host/discovery.sh@147 -- # get_bdev_list 00:31:18.281 18:08:22 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:18.281 18:08:22 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:18.281 18:08:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.281 18:08:22 -- host/discovery.sh@55 -- # sort 00:31:18.281 18:08:22 -- common/autotest_common.sh@10 -- # set +x 00:31:18.281 18:08:22 -- host/discovery.sh@55 -- # xargs 00:31:18.281 18:08:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.281 18:08:22 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:18.281 18:08:22 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:18.281 18:08:22 -- common/autotest_common.sh@640 -- # local es=0 00:31:18.281 18:08:22 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:18.281 18:08:22 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:31:18.281 18:08:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:18.281 18:08:22 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:31:18.281 18:08:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:18.281 18:08:22 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:18.281 18:08:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.281 18:08:22 -- common/autotest_common.sh@10 -- # set +x 00:31:18.281 request: 00:31:18.281 { 00:31:18.281 "name": "nvme_second", 00:31:18.281 "trtype": "tcp", 00:31:18.281 "traddr": "10.0.0.2", 00:31:18.281 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:18.281 "adrfam": "ipv4", 00:31:18.281 "trsvcid": "8009", 00:31:18.281 "wait_for_attach": true, 00:31:18.281 "method": "bdev_nvme_start_discovery", 00:31:18.281 "req_id": 1 00:31:18.281 } 00:31:18.281 Got JSON-RPC error response 00:31:18.281 response: 00:31:18.281 { 00:31:18.281 "code": -17, 00:31:18.281 "message": "File exists" 00:31:18.281 } 00:31:18.281 18:08:22 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:31:18.281 18:08:22 -- common/autotest_common.sh@643 -- # es=1 00:31:18.281 18:08:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:31:18.281 18:08:22 -- common/autotest_common.sh@662 -- 
# [[ -n '' ]] 00:31:18.281 18:08:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:31:18.281 18:08:22 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:31:18.281 18:08:22 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:18.281 18:08:22 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:18.281 18:08:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.281 18:08:22 -- common/autotest_common.sh@10 -- # set +x 00:31:18.281 18:08:22 -- host/discovery.sh@67 -- # sort 00:31:18.281 18:08:22 -- host/discovery.sh@67 -- # xargs 00:31:18.281 18:08:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.541 18:08:22 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:31:18.541 18:08:22 -- host/discovery.sh@153 -- # get_bdev_list 00:31:18.541 18:08:22 -- host/discovery.sh@55 -- # xargs 00:31:18.541 18:08:22 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:18.541 18:08:22 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:18.541 18:08:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.541 18:08:22 -- host/discovery.sh@55 -- # sort 00:31:18.541 18:08:22 -- common/autotest_common.sh@10 -- # set +x 00:31:18.542 18:08:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.542 18:08:22 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:18.542 18:08:22 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:18.542 18:08:22 -- common/autotest_common.sh@640 -- # local es=0 00:31:18.542 18:08:22 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:18.542 18:08:22 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:31:18.542 18:08:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:18.542 18:08:22 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:31:18.542 18:08:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:31:18.542 18:08:22 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:18.542 18:08:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.542 18:08:22 -- common/autotest_common.sh@10 -- # set +x 00:31:19.484 [2024-07-22 18:08:23.630285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.484 [2024-07-22 18:08:23.630605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.484 [2024-07-22 18:08:23.630617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x634f20 with addr=10.0.0.2, port=8010 00:31:19.484 [2024-07-22 18:08:23.630629] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:19.484 [2024-07-22 18:08:23.630637] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:19.484 [2024-07-22 18:08:23.630644] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:20.424 [2024-07-22 18:08:24.632604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.424 [2024-07-22 18:08:24.632913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.424 
[2024-07-22 18:08:24.632923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x634f20 with addr=10.0.0.2, port=8010 00:31:20.424 [2024-07-22 18:08:24.632934] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:20.424 [2024-07-22 18:08:24.632941] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:20.424 [2024-07-22 18:08:24.632947] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:21.365 [2024-07-22 18:08:25.634639] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:21.365 request: 00:31:21.365 { 00:31:21.365 "name": "nvme_second", 00:31:21.365 "trtype": "tcp", 00:31:21.365 "traddr": "10.0.0.2", 00:31:21.365 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:21.365 "adrfam": "ipv4", 00:31:21.365 "trsvcid": "8010", 00:31:21.365 "attach_timeout_ms": 3000, 00:31:21.365 "method": "bdev_nvme_start_discovery", 00:31:21.365 "req_id": 1 00:31:21.365 } 00:31:21.365 Got JSON-RPC error response 00:31:21.365 response: 00:31:21.365 { 00:31:21.365 "code": -110, 00:31:21.365 "message": "Connection timed out" 00:31:21.365 } 00:31:21.365 18:08:25 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:31:21.365 18:08:25 -- common/autotest_common.sh@643 -- # es=1 00:31:21.365 18:08:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:31:21.365 18:08:25 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:31:21.365 18:08:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:31:21.626 18:08:25 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:31:21.626 18:08:25 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:21.626 18:08:25 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:21.626 18:08:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:21.626 18:08:25 -- host/discovery.sh@67 -- # sort 00:31:21.626 18:08:25 -- host/discovery.sh@67 -- # xargs 00:31:21.626 18:08:25 -- common/autotest_common.sh@10 -- # set +x 00:31:21.626 18:08:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:21.626 18:08:25 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:31:21.626 18:08:25 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:31:21.626 18:08:25 -- host/discovery.sh@162 -- # kill 1858492 00:31:21.626 18:08:25 -- host/discovery.sh@163 -- # nvmftestfini 00:31:21.626 18:08:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:21.626 18:08:25 -- nvmf/common.sh@116 -- # sync 00:31:21.626 18:08:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:21.626 18:08:25 -- nvmf/common.sh@119 -- # set +e 00:31:21.626 18:08:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:21.626 18:08:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:21.626 rmmod nvme_tcp 00:31:21.626 rmmod nvme_fabrics 00:31:21.626 rmmod nvme_keyring 00:31:21.626 18:08:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:21.626 18:08:25 -- nvmf/common.sh@123 -- # set -e 00:31:21.626 18:08:25 -- nvmf/common.sh@124 -- # return 0 00:31:21.626 18:08:25 -- nvmf/common.sh@477 -- # '[' -n 1858365 ']' 00:31:21.626 18:08:25 -- nvmf/common.sh@478 -- # killprocess 1858365 00:31:21.626 18:08:25 -- common/autotest_common.sh@926 -- # '[' -z 1858365 ']' 00:31:21.626 18:08:25 -- common/autotest_common.sh@930 -- # kill -0 1858365 00:31:21.626 18:08:25 -- common/autotest_common.sh@931 -- # uname 00:31:21.626 18:08:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
00:31:21.626 18:08:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1858365 00:31:21.626 18:08:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:21.626 18:08:25 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:21.626 18:08:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1858365' 00:31:21.626 killing process with pid 1858365 00:31:21.626 18:08:25 -- common/autotest_common.sh@945 -- # kill 1858365 00:31:21.626 18:08:25 -- common/autotest_common.sh@950 -- # wait 1858365 00:31:21.886 18:08:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:21.886 18:08:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:21.886 18:08:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:21.886 18:08:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:21.886 18:08:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:21.886 18:08:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:21.886 18:08:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:21.886 18:08:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:23.798 18:08:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:23.798 00:31:23.798 real 0m23.695s 00:31:23.798 user 0m28.816s 00:31:23.798 sys 0m7.676s 00:31:23.798 18:08:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:23.798 18:08:28 -- common/autotest_common.sh@10 -- # set +x 00:31:23.798 ************************************ 00:31:23.798 END TEST nvmf_discovery 00:31:23.798 ************************************ 00:31:23.798 18:08:28 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:23.798 18:08:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:23.798 18:08:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:23.798 18:08:28 -- common/autotest_common.sh@10 -- # set +x 00:31:23.798 ************************************ 00:31:23.798 START TEST nvmf_discovery_remove_ifc 00:31:23.798 ************************************ 00:31:23.798 18:08:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:24.060 * Looking for test storage... 
00:31:24.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:24.060 18:08:28 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:24.060 18:08:28 -- nvmf/common.sh@7 -- # uname -s 00:31:24.060 18:08:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:24.060 18:08:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:24.060 18:08:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:24.060 18:08:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:24.060 18:08:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:24.060 18:08:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:24.060 18:08:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:24.060 18:08:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:24.060 18:08:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:24.060 18:08:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:24.060 18:08:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:31:24.060 18:08:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:31:24.060 18:08:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:24.060 18:08:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:24.060 18:08:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:24.060 18:08:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:24.060 18:08:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:24.060 18:08:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:24.060 18:08:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:24.060 18:08:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.060 18:08:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.060 18:08:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.060 18:08:28 -- paths/export.sh@5 -- # export PATH 00:31:24.060 18:08:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.060 18:08:28 -- nvmf/common.sh@46 -- # : 0 00:31:24.060 18:08:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:24.060 18:08:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:24.060 18:08:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:24.060 18:08:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:24.060 18:08:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:24.060 18:08:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:24.060 18:08:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:24.060 18:08:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:24.060 18:08:28 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:24.060 18:08:28 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:24.060 18:08:28 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:24.060 18:08:28 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:24.060 18:08:28 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:24.060 18:08:28 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:31:24.060 18:08:28 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:24.060 18:08:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:24.060 18:08:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:24.060 18:08:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:24.060 18:08:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:24.060 18:08:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:24.060 18:08:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:24.060 18:08:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:24.060 18:08:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:24.060 18:08:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:24.060 18:08:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:24.060 18:08:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:24.060 18:08:28 -- common/autotest_common.sh@10 -- # set +x 00:31:32.202 18:08:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:32.202 18:08:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:32.202 18:08:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:32.202 18:08:35 
-- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:32.202 18:08:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:32.202 18:08:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:32.202 18:08:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:32.202 18:08:35 -- nvmf/common.sh@294 -- # net_devs=() 00:31:32.202 18:08:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:32.202 18:08:35 -- nvmf/common.sh@295 -- # e810=() 00:31:32.202 18:08:35 -- nvmf/common.sh@295 -- # local -ga e810 00:31:32.202 18:08:35 -- nvmf/common.sh@296 -- # x722=() 00:31:32.202 18:08:35 -- nvmf/common.sh@296 -- # local -ga x722 00:31:32.202 18:08:35 -- nvmf/common.sh@297 -- # mlx=() 00:31:32.202 18:08:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:32.202 18:08:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:32.202 18:08:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:32.202 18:08:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:32.202 18:08:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:32.202 18:08:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:32.202 18:08:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:32.202 18:08:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:32.202 18:08:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:32.202 18:08:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:32.202 18:08:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:32.202 18:08:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:32.202 18:08:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:32.202 18:08:35 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:32.202 18:08:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:32.202 18:08:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:32.202 18:08:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:32.202 18:08:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:32.202 18:08:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:32.202 18:08:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:32.202 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:32.202 18:08:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:32.202 18:08:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:32.202 18:08:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.202 18:08:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.203 18:08:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:32.203 18:08:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:32.203 18:08:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:32.203 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:32.203 18:08:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:32.203 18:08:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:32.203 18:08:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:32.203 18:08:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:32.203 18:08:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:32.203 18:08:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:32.203 18:08:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:32.203 18:08:35 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:32.203 18:08:35 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:32.203 18:08:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.203 18:08:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:32.203 18:08:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.203 18:08:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:32.203 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:32.203 18:08:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.203 18:08:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:32.203 18:08:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:32.203 18:08:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:32.203 18:08:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:32.203 18:08:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:32.203 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:32.203 18:08:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:32.203 18:08:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:32.203 18:08:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:32.203 18:08:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:32.203 18:08:35 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:32.203 18:08:35 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:32.203 18:08:35 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:32.203 18:08:35 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:32.203 18:08:35 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:32.203 18:08:35 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:32.203 18:08:35 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:32.203 18:08:35 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:32.203 18:08:35 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:32.203 18:08:35 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:32.203 18:08:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:32.203 18:08:35 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:32.203 18:08:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:32.203 18:08:35 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:32.203 18:08:35 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:32.203 18:08:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:32.203 18:08:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:32.203 18:08:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:32.203 18:08:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:32.203 18:08:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:32.203 18:08:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:32.203 18:08:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:32.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:32.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms 00:31:32.203 00:31:32.203 --- 10.0.0.2 ping statistics --- 00:31:32.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.203 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:31:32.203 18:08:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:32.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:32.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:31:32.203 00:31:32.203 --- 10.0.0.1 ping statistics --- 00:31:32.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:32.203 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:31:32.203 18:08:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:32.203 18:08:35 -- nvmf/common.sh@410 -- # return 0 00:31:32.203 18:08:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:32.203 18:08:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:32.203 18:08:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:32.203 18:08:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:32.203 18:08:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:32.203 18:08:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:32.203 18:08:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:32.203 18:08:35 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:32.203 18:08:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:32.203 18:08:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:32.203 18:08:35 -- common/autotest_common.sh@10 -- # set +x 00:31:32.203 18:08:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:32.203 18:08:35 -- nvmf/common.sh@469 -- # nvmfpid=1864918 00:31:32.203 18:08:35 -- nvmf/common.sh@470 -- # waitforlisten 1864918 00:31:32.203 18:08:35 -- common/autotest_common.sh@819 -- # '[' -z 1864918 ']' 00:31:32.203 18:08:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:32.203 18:08:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:32.203 18:08:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:32.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:32.203 18:08:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:32.203 18:08:35 -- common/autotest_common.sh@10 -- # set +x 00:31:32.203 [2024-07-22 18:08:35.861309] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:31:32.203 [2024-07-22 18:08:35.861373] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:32.203 EAL: No free 2048 kB hugepages reported on node 1 00:31:32.203 [2024-07-22 18:08:35.931615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.203 [2024-07-22 18:08:35.991300] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:32.203 [2024-07-22 18:08:35.991424] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:32.203 [2024-07-22 18:08:35.991432] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:32.203 [2024-07-22 18:08:35.991439] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:32.203 [2024-07-22 18:08:35.991455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:32.464 18:08:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:32.464 18:08:36 -- common/autotest_common.sh@852 -- # return 0 00:31:32.464 18:08:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:32.464 18:08:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:32.464 18:08:36 -- common/autotest_common.sh@10 -- # set +x 00:31:32.464 18:08:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:32.464 18:08:36 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:32.464 18:08:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:32.464 18:08:36 -- common/autotest_common.sh@10 -- # set +x 00:31:32.464 [2024-07-22 18:08:36.711746] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:32.464 [2024-07-22 18:08:36.719874] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:32.464 null0 00:31:32.724 [2024-07-22 18:08:36.751893] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:32.724 18:08:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:32.724 18:08:36 -- host/discovery_remove_ifc.sh@59 -- # hostpid=1865234 00:31:32.724 18:08:36 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1865234 /tmp/host.sock 00:31:32.724 18:08:36 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:32.724 18:08:36 -- common/autotest_common.sh@819 -- # '[' -z 1865234 ']' 00:31:32.724 18:08:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:31:32.724 18:08:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:32.724 18:08:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:32.724 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:32.724 18:08:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:32.724 18:08:36 -- common/autotest_common.sh@10 -- # set +x 00:31:32.724 [2024-07-22 18:08:36.819562] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:31:32.724 [2024-07-22 18:08:36.819604] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1865234 ] 00:31:32.724 EAL: No free 2048 kB hugepages reported on node 1 00:31:32.724 [2024-07-22 18:08:36.899569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.724 [2024-07-22 18:08:36.958818] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:32.724 [2024-07-22 18:08:36.958938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:33.663 18:08:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:33.663 18:08:37 -- common/autotest_common.sh@852 -- # return 0 00:31:33.663 18:08:37 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:33.663 18:08:37 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:33.663 18:08:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.663 18:08:37 -- common/autotest_common.sh@10 -- # set +x 00:31:33.663 18:08:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.663 18:08:37 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:33.663 18:08:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.663 18:08:37 -- common/autotest_common.sh@10 -- # set +x 00:31:33.663 18:08:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.663 18:08:37 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:33.663 18:08:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.663 18:08:37 -- common/autotest_common.sh@10 -- # set +x 00:31:34.602 [2024-07-22 18:08:38.792428] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:34.602 [2024-07-22 18:08:38.792450] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:34.602 [2024-07-22 18:08:38.792463] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:34.866 [2024-07-22 18:08:38.920866] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:34.866 [2024-07-22 18:08:39.105556] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:34.866 [2024-07-22 18:08:39.105596] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:34.866 [2024-07-22 18:08:39.105616] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:34.866 [2024-07-22 18:08:39.105629] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:34.866 [2024-07-22 18:08:39.105648] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:34.866 18:08:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:34.866 18:08:39 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:34.866 [2024-07-22 18:08:39.109518] 
bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x20eef20 was disconnected and freed. delete nvme_qpair. 00:31:34.866 18:08:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:34.866 18:08:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:34.866 18:08:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:34.866 18:08:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:34.866 18:08:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:34.866 18:08:39 -- common/autotest_common.sh@10 -- # set +x 00:31:34.866 18:08:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:34.866 18:08:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:35.180 18:08:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:35.180 18:08:39 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:35.180 18:08:39 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:35.180 18:08:39 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:35.180 18:08:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:35.180 18:08:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:35.180 18:08:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:35.180 18:08:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:35.180 18:08:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:35.180 18:08:39 -- common/autotest_common.sh@10 -- # set +x 00:31:35.180 18:08:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:35.180 18:08:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:35.180 18:08:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:35.180 18:08:39 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:36.121 18:08:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:36.121 18:08:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:36.121 18:08:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:36.121 18:08:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:36.121 18:08:40 -- common/autotest_common.sh@10 -- # set +x 00:31:36.121 18:08:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:36.121 18:08:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:36.121 18:08:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:36.381 18:08:40 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:36.381 18:08:40 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:37.322 18:08:41 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:37.322 18:08:41 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:37.322 18:08:41 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:37.322 18:08:41 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:37.322 18:08:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:37.322 18:08:41 -- common/autotest_common.sh@10 -- # set +x 00:31:37.322 18:08:41 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:37.322 18:08:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:37.322 18:08:41 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:37.322 18:08:41 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:38.259 18:08:42 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:38.259 18:08:42 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:38.259 18:08:42 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:38.259 18:08:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:38.259 18:08:42 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:38.259 18:08:42 -- common/autotest_common.sh@10 -- # set +x 00:31:38.259 18:08:42 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:38.259 18:08:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:38.259 18:08:42 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:38.259 18:08:42 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:39.643 18:08:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:39.643 18:08:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:39.643 18:08:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:39.643 18:08:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:39.643 18:08:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:39.643 18:08:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:39.643 18:08:43 -- common/autotest_common.sh@10 -- # set +x 00:31:39.643 18:08:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:39.643 18:08:43 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:39.643 18:08:43 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:40.584 [2024-07-22 18:08:44.546140] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:40.584 [2024-07-22 18:08:44.546183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:40.584 [2024-07-22 18:08:44.546194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.584 [2024-07-22 18:08:44.546203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:40.584 [2024-07-22 18:08:44.546210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.584 [2024-07-22 18:08:44.546218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:40.584 [2024-07-22 18:08:44.546224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.584 [2024-07-22 18:08:44.546231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:40.584 [2024-07-22 18:08:44.546238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.584 [2024-07-22 18:08:44.546245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:40.584 [2024-07-22 18:08:44.546252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:40.584 [2024-07-22 18:08:44.546258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b55d0 is same with the state(5) to be set 00:31:40.584 [2024-07-22 
18:08:44.556161] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b55d0 (9): Bad file descriptor 00:31:40.584 [2024-07-22 18:08:44.566202] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:40.584 18:08:44 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:40.584 18:08:44 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:40.584 18:08:44 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:40.584 18:08:44 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:40.584 18:08:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:40.584 18:08:44 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:40.584 18:08:44 -- common/autotest_common.sh@10 -- # set +x 00:31:41.524 [2024-07-22 18:08:45.619426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:42.464 [2024-07-22 18:08:46.643430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:42.464 [2024-07-22 18:08:46.643516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20b55d0 with addr=10.0.0.2, port=4420 00:31:42.464 [2024-07-22 18:08:46.643546] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b55d0 is same with the state(5) to be set 00:31:42.464 [2024-07-22 18:08:46.643594] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:42.464 [2024-07-22 18:08:46.643615] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:42.464 [2024-07-22 18:08:46.643634] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:42.464 [2024-07-22 18:08:46.643655] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:31:42.464 [2024-07-22 18:08:46.644608] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b55d0 (9): Bad file descriptor 00:31:42.464 [2024-07-22 18:08:46.644668] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:42.464 [2024-07-22 18:08:46.644717] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:42.464 [2024-07-22 18:08:46.644769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:42.464 [2024-07-22 18:08:46.644806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.464 [2024-07-22 18:08:46.644833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:42.464 [2024-07-22 18:08:46.644854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.464 [2024-07-22 18:08:46.644878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:42.464 [2024-07-22 18:08:46.644899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.464 [2024-07-22 18:08:46.644922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:42.464 [2024-07-22 18:08:46.644943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.464 [2024-07-22 18:08:46.644967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:42.464 [2024-07-22 18:08:46.644987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.464 [2024-07-22 18:08:46.645009] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:31:42.464 [2024-07-22 18:08:46.645038] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b59e0 (9): Bad file descriptor 00:31:42.464 [2024-07-22 18:08:46.645701] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:42.464 [2024-07-22 18:08:46.645733] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:31:42.464 18:08:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:42.464 18:08:46 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:42.464 18:08:46 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:43.405 18:08:47 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:43.405 18:08:47 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:43.405 18:08:47 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:43.405 18:08:47 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:43.405 18:08:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:43.405 18:08:47 -- common/autotest_common.sh@10 -- # set +x 00:31:43.405 18:08:47 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:43.665 18:08:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:43.665 18:08:47 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:43.665 18:08:47 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:43.665 18:08:47 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:43.665 18:08:47 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:43.665 18:08:47 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:43.665 18:08:47 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:43.665 18:08:47 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:43.665 18:08:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:43.665 18:08:47 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:43.665 18:08:47 -- common/autotest_common.sh@10 -- # set +x 00:31:43.665 18:08:47 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:43.665 18:08:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:43.665 18:08:47 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:43.665 18:08:47 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:44.605 [2024-07-22 18:08:48.697212] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:44.605 [2024-07-22 18:08:48.697231] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:44.605 [2024-07-22 18:08:48.697243] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:44.605 [2024-07-22 18:08:48.826656] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:44.866 18:08:48 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:44.866 18:08:48 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:44.866 18:08:48 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:44.866 18:08:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:44.866 18:08:48 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:44.866 18:08:48 -- common/autotest_common.sh@10 -- # set +x 
00:31:44.866 18:08:48 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:44.866 18:08:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:44.866 18:08:48 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:44.866 18:08:48 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:44.866 [2024-07-22 18:08:49.050691] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:44.866 [2024-07-22 18:08:49.050725] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:44.866 [2024-07-22 18:08:49.050743] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:44.866 [2024-07-22 18:08:49.050756] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:44.866 [2024-07-22 18:08:49.050763] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:44.866 [2024-07-22 18:08:49.054245] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x20f83d0 was disconnected and freed. delete nvme_qpair. 00:31:45.806 18:08:49 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:45.806 18:08:49 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:45.806 18:08:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:45.806 18:08:49 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:45.806 18:08:49 -- common/autotest_common.sh@10 -- # set +x 00:31:45.807 18:08:49 -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:45.807 18:08:49 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:45.807 18:08:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:45.807 18:08:49 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:45.807 18:08:49 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:45.807 18:08:49 -- host/discovery_remove_ifc.sh@90 -- # killprocess 1865234 00:31:45.807 18:08:49 -- common/autotest_common.sh@926 -- # '[' -z 1865234 ']' 00:31:45.807 18:08:49 -- common/autotest_common.sh@930 -- # kill -0 1865234 00:31:45.807 18:08:49 -- common/autotest_common.sh@931 -- # uname 00:31:45.807 18:08:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:45.807 18:08:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1865234 00:31:45.807 18:08:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:45.807 18:08:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:45.807 18:08:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1865234' 00:31:45.807 killing process with pid 1865234 00:31:45.807 18:08:50 -- common/autotest_common.sh@945 -- # kill 1865234 00:31:45.807 18:08:50 -- common/autotest_common.sh@950 -- # wait 1865234 00:31:46.067 18:08:50 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:46.067 18:08:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:46.067 18:08:50 -- nvmf/common.sh@116 -- # sync 00:31:46.067 18:08:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:46.067 18:08:50 -- nvmf/common.sh@119 -- # set +e 00:31:46.067 18:08:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:46.067 18:08:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:46.067 rmmod nvme_tcp 00:31:46.067 rmmod nvme_fabrics 00:31:46.067 rmmod nvme_keyring 00:31:46.067 18:08:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:46.067 18:08:50 -- nvmf/common.sh@123 -- # set -e 00:31:46.067 18:08:50 
-- nvmf/common.sh@124 -- # return 0 00:31:46.067 18:08:50 -- nvmf/common.sh@477 -- # '[' -n 1864918 ']' 00:31:46.067 18:08:50 -- nvmf/common.sh@478 -- # killprocess 1864918 00:31:46.067 18:08:50 -- common/autotest_common.sh@926 -- # '[' -z 1864918 ']' 00:31:46.067 18:08:50 -- common/autotest_common.sh@930 -- # kill -0 1864918 00:31:46.067 18:08:50 -- common/autotest_common.sh@931 -- # uname 00:31:46.067 18:08:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:46.067 18:08:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1864918 00:31:46.067 18:08:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:31:46.067 18:08:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:31:46.067 18:08:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1864918' 00:31:46.067 killing process with pid 1864918 00:31:46.067 18:08:50 -- common/autotest_common.sh@945 -- # kill 1864918 00:31:46.067 18:08:50 -- common/autotest_common.sh@950 -- # wait 1864918 00:31:46.328 18:08:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:46.328 18:08:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:46.328 18:08:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:46.328 18:08:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:46.328 18:08:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:46.328 18:08:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:46.328 18:08:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:46.328 18:08:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:48.241 18:08:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:48.241 00:31:48.241 real 0m24.422s 00:31:48.241 user 0m28.515s 00:31:48.241 sys 0m6.904s 00:31:48.241 18:08:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:48.241 18:08:52 -- common/autotest_common.sh@10 -- # set +x 00:31:48.241 ************************************ 00:31:48.241 END TEST nvmf_discovery_remove_ifc 00:31:48.241 ************************************ 00:31:48.503 18:08:52 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:31:48.503 18:08:52 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:48.503 18:08:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:48.503 18:08:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:48.503 18:08:52 -- common/autotest_common.sh@10 -- # set +x 00:31:48.503 ************************************ 00:31:48.503 START TEST nvmf_digest 00:31:48.503 ************************************ 00:31:48.503 18:08:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:31:48.503 * Looking for test storage... 
00:31:48.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:48.503 18:08:52 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:48.503 18:08:52 -- nvmf/common.sh@7 -- # uname -s 00:31:48.503 18:08:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:48.503 18:08:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:48.503 18:08:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:48.503 18:08:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:48.503 18:08:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:48.503 18:08:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:48.503 18:08:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:48.503 18:08:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:48.503 18:08:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:48.503 18:08:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:48.503 18:08:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:31:48.503 18:08:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:31:48.503 18:08:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:48.503 18:08:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:48.503 18:08:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:48.503 18:08:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:48.503 18:08:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:48.503 18:08:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:48.503 18:08:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:48.503 18:08:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.503 18:08:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.503 18:08:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.503 18:08:52 -- paths/export.sh@5 -- # export PATH 00:31:48.503 18:08:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.503 18:08:52 -- nvmf/common.sh@46 -- # : 0 00:31:48.503 18:08:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:48.503 18:08:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:48.503 18:08:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:48.503 18:08:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:48.503 18:08:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:48.503 18:08:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:48.503 18:08:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:48.503 18:08:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:48.503 18:08:52 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:48.503 18:08:52 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:31:48.503 18:08:52 -- host/digest.sh@16 -- # runtime=2 00:31:48.503 18:08:52 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:31:48.503 18:08:52 -- host/digest.sh@132 -- # nvmftestinit 00:31:48.503 18:08:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:48.504 18:08:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:48.504 18:08:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:48.504 18:08:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:48.504 18:08:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:48.504 18:08:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:48.504 18:08:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:48.504 18:08:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:48.504 18:08:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:48.504 18:08:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:48.504 18:08:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:48.504 18:08:52 -- common/autotest_common.sh@10 -- # set +x 00:31:56.651 18:09:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:56.651 18:09:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:56.651 18:09:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:56.651 18:09:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:56.651 18:09:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:56.651 18:09:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:56.651 18:09:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:56.651 18:09:00 -- 
nvmf/common.sh@294 -- # net_devs=() 00:31:56.651 18:09:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:56.651 18:09:00 -- nvmf/common.sh@295 -- # e810=() 00:31:56.651 18:09:00 -- nvmf/common.sh@295 -- # local -ga e810 00:31:56.651 18:09:00 -- nvmf/common.sh@296 -- # x722=() 00:31:56.651 18:09:00 -- nvmf/common.sh@296 -- # local -ga x722 00:31:56.651 18:09:00 -- nvmf/common.sh@297 -- # mlx=() 00:31:56.651 18:09:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:56.651 18:09:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:56.651 18:09:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:56.651 18:09:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:56.651 18:09:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:56.651 18:09:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:56.651 18:09:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:56.651 18:09:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:56.651 18:09:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:56.651 18:09:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:56.651 18:09:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:56.651 18:09:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:56.651 18:09:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:56.651 18:09:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:56.651 18:09:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:56.651 18:09:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:56.651 18:09:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:56.651 18:09:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:56.651 18:09:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:56.651 18:09:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:56.651 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:56.651 18:09:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:56.651 18:09:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:56.651 18:09:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:56.651 18:09:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.651 18:09:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:56.651 18:09:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:56.651 18:09:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:56.651 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:56.651 18:09:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:56.651 18:09:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:56.651 18:09:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:56.651 18:09:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.651 18:09:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:56.651 18:09:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:56.651 18:09:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:56.651 18:09:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:56.651 18:09:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:56.651 18:09:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.651 18:09:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:56.651 18:09:00 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.651 18:09:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:56.651 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:56.651 18:09:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.651 18:09:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:56.651 18:09:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.651 18:09:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:56.651 18:09:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.651 18:09:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:56.651 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:56.651 18:09:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.651 18:09:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:56.651 18:09:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:56.651 18:09:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:56.651 18:09:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:56.651 18:09:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:56.651 18:09:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:56.651 18:09:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:56.651 18:09:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:56.651 18:09:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:56.651 18:09:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:56.651 18:09:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:56.651 18:09:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:56.651 18:09:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:56.651 18:09:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:56.651 18:09:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:56.651 18:09:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:56.651 18:09:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:56.651 18:09:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:56.651 18:09:00 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:56.651 18:09:00 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:56.651 18:09:00 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:56.651 18:09:00 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:56.912 18:09:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:56.912 18:09:00 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:56.913 18:09:00 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:56.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:56.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:31:56.913 00:31:56.913 --- 10.0.0.2 ping statistics --- 00:31:56.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.913 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:31:56.913 18:09:00 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:56.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:56.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:31:56.913 00:31:56.913 --- 10.0.0.1 ping statistics --- 00:31:56.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.913 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:31:56.913 18:09:00 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:56.913 18:09:00 -- nvmf/common.sh@410 -- # return 0 00:31:56.913 18:09:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:56.913 18:09:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:56.913 18:09:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:56.913 18:09:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:56.913 18:09:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:56.913 18:09:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:56.913 18:09:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:56.913 18:09:01 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:56.913 18:09:01 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:31:56.913 18:09:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:56.913 18:09:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:56.913 18:09:01 -- common/autotest_common.sh@10 -- # set +x 00:31:56.913 ************************************ 00:31:56.913 START TEST nvmf_digest_clean 00:31:56.913 ************************************ 00:31:56.913 18:09:01 -- common/autotest_common.sh@1104 -- # run_digest 00:31:56.913 18:09:01 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:31:56.913 18:09:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:56.913 18:09:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:56.913 18:09:01 -- common/autotest_common.sh@10 -- # set +x 00:31:56.913 18:09:01 -- nvmf/common.sh@469 -- # nvmfpid=1872003 00:31:56.913 18:09:01 -- nvmf/common.sh@470 -- # waitforlisten 1872003 00:31:56.913 18:09:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:31:56.913 18:09:01 -- common/autotest_common.sh@819 -- # '[' -z 1872003 ']' 00:31:56.913 18:09:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:56.913 18:09:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:56.913 18:09:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:56.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:56.913 18:09:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:56.913 18:09:01 -- common/autotest_common.sh@10 -- # set +x 00:31:56.913 [2024-07-22 18:09:01.089480] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:31:56.913 [2024-07-22 18:09:01.089537] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:56.913 EAL: No free 2048 kB hugepages reported on node 1 00:31:56.913 [2024-07-22 18:09:01.179018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:57.174 [2024-07-22 18:09:01.269087] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:57.174 [2024-07-22 18:09:01.269238] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:57.174 [2024-07-22 18:09:01.269246] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:57.174 [2024-07-22 18:09:01.269253] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:57.174 [2024-07-22 18:09:01.269277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:57.747 18:09:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:57.747 18:09:01 -- common/autotest_common.sh@852 -- # return 0 00:31:57.747 18:09:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:57.747 18:09:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:57.747 18:09:01 -- common/autotest_common.sh@10 -- # set +x 00:31:57.747 18:09:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:57.747 18:09:01 -- host/digest.sh@120 -- # common_target_config 00:31:57.747 18:09:01 -- host/digest.sh@43 -- # rpc_cmd 00:31:57.747 18:09:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:57.747 18:09:01 -- common/autotest_common.sh@10 -- # set +x 00:31:58.008 null0 00:31:58.008 [2024-07-22 18:09:02.072574] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:58.008 [2024-07-22 18:09:02.096805] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:58.008 18:09:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:58.008 18:09:02 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:31:58.008 18:09:02 -- host/digest.sh@77 -- # local rw bs qd 00:31:58.008 18:09:02 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:31:58.008 18:09:02 -- host/digest.sh@80 -- # rw=randread 00:31:58.008 18:09:02 -- host/digest.sh@80 -- # bs=4096 00:31:58.008 18:09:02 -- host/digest.sh@80 -- # qd=128 00:31:58.008 18:09:02 -- host/digest.sh@82 -- # bperfpid=1872125 00:31:58.008 18:09:02 -- host/digest.sh@83 -- # waitforlisten 1872125 /var/tmp/bperf.sock 00:31:58.008 18:09:02 -- common/autotest_common.sh@819 -- # '[' -z 1872125 ']' 00:31:58.008 18:09:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:58.008 18:09:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:58.008 18:09:02 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:31:58.008 18:09:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:58.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:31:58.008 18:09:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:58.008 18:09:02 -- common/autotest_common.sh@10 -- # set +x 00:31:58.008 [2024-07-22 18:09:02.157432] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:31:58.008 [2024-07-22 18:09:02.157500] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1872125 ] 00:31:58.008 EAL: No free 2048 kB hugepages reported on node 1 00:31:58.008 [2024-07-22 18:09:02.224710] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:58.269 [2024-07-22 18:09:02.293969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:58.850 18:09:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:58.850 18:09:03 -- common/autotest_common.sh@852 -- # return 0 00:31:58.850 18:09:03 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:31:58.850 18:09:03 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:31:58.850 18:09:03 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:31:59.113 18:09:03 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:59.113 18:09:03 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:59.373 nvme0n1 00:31:59.373 18:09:03 -- host/digest.sh@91 -- # bperf_py perform_tests 00:31:59.373 18:09:03 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:59.373 Running I/O for 2 seconds... 
00:32:01.917 00:32:01.917 Latency(us) 00:32:01.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:01.917 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:01.917 nvme0n1 : 2.01 17188.28 67.14 0.00 0.00 7438.46 2432.39 14115.45 00:32:01.917 =================================================================================================================== 00:32:01.917 Total : 17188.28 67.14 0.00 0.00 7438.46 2432.39 14115.45 00:32:01.917 0 00:32:01.917 18:09:05 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:32:01.917 18:09:05 -- host/digest.sh@92 -- # get_accel_stats 00:32:01.917 18:09:05 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:01.917 18:09:05 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:01.917 | select(.opcode=="crc32c") 00:32:01.917 | "\(.module_name) \(.executed)"' 00:32:01.917 18:09:05 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:01.917 18:09:05 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:32:01.917 18:09:05 -- host/digest.sh@93 -- # exp_module=software 00:32:01.917 18:09:05 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:32:01.917 18:09:05 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:01.917 18:09:05 -- host/digest.sh@97 -- # killprocess 1872125 00:32:01.917 18:09:05 -- common/autotest_common.sh@926 -- # '[' -z 1872125 ']' 00:32:01.917 18:09:05 -- common/autotest_common.sh@930 -- # kill -0 1872125 00:32:01.917 18:09:05 -- common/autotest_common.sh@931 -- # uname 00:32:01.917 18:09:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:01.917 18:09:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1872125 00:32:01.917 18:09:05 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:01.917 18:09:05 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:01.917 18:09:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1872125' 00:32:01.917 killing process with pid 1872125 00:32:01.917 18:09:05 -- common/autotest_common.sh@945 -- # kill 1872125 00:32:01.917 Received shutdown signal, test time was about 2.000000 seconds 00:32:01.917 00:32:01.917 Latency(us) 00:32:01.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:01.917 =================================================================================================================== 00:32:01.917 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:01.917 18:09:05 -- common/autotest_common.sh@950 -- # wait 1872125 00:32:01.917 18:09:06 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:32:01.917 18:09:06 -- host/digest.sh@77 -- # local rw bs qd 00:32:01.917 18:09:06 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:01.917 18:09:06 -- host/digest.sh@80 -- # rw=randread 00:32:01.917 18:09:06 -- host/digest.sh@80 -- # bs=131072 00:32:01.917 18:09:06 -- host/digest.sh@80 -- # qd=16 00:32:01.917 18:09:06 -- host/digest.sh@82 -- # bperfpid=1872776 00:32:01.917 18:09:06 -- host/digest.sh@83 -- # waitforlisten 1872776 /var/tmp/bperf.sock 00:32:01.917 18:09:06 -- common/autotest_common.sh@819 -- # '[' -z 1872776 ']' 00:32:01.917 18:09:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:01.917 18:09:06 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 
00:32:01.917 18:09:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:01.917 18:09:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:01.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:01.917 18:09:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:01.917 18:09:06 -- common/autotest_common.sh@10 -- # set +x 00:32:01.917 [2024-07-22 18:09:06.057900] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:32:01.917 [2024-07-22 18:09:06.057957] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1872776 ] 00:32:01.917 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:01.917 Zero copy mechanism will not be used. 00:32:01.917 EAL: No free 2048 kB hugepages reported on node 1 00:32:01.917 [2024-07-22 18:09:06.118201] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:01.917 [2024-07-22 18:09:06.177121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:02.179 18:09:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:02.179 18:09:06 -- common/autotest_common.sh@852 -- # return 0 00:32:02.179 18:09:06 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:32:02.179 18:09:06 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:32:02.179 18:09:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:02.179 18:09:06 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:02.179 18:09:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:02.751 nvme0n1 00:32:02.751 18:09:06 -- host/digest.sh@91 -- # bperf_py perform_tests 00:32:02.751 18:09:06 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:02.751 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:02.751 Zero copy mechanism will not be used. 00:32:02.751 Running I/O for 2 seconds... 
00:32:05.297 00:32:05.297 Latency(us) 00:32:05.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:05.297 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:05.297 nvme0n1 : 2.00 3581.16 447.64 0.00 0.00 4464.89 970.44 9175.04 00:32:05.297 =================================================================================================================== 00:32:05.297 Total : 3581.16 447.64 0.00 0.00 4464.89 970.44 9175.04 00:32:05.297 0 00:32:05.297 18:09:08 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:32:05.297 18:09:08 -- host/digest.sh@92 -- # get_accel_stats 00:32:05.297 18:09:08 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:05.297 18:09:08 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:05.297 18:09:08 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:05.297 | select(.opcode=="crc32c") 00:32:05.297 | "\(.module_name) \(.executed)"' 00:32:05.297 18:09:09 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:32:05.297 18:09:09 -- host/digest.sh@93 -- # exp_module=software 00:32:05.297 18:09:09 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:32:05.297 18:09:09 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:05.297 18:09:09 -- host/digest.sh@97 -- # killprocess 1872776 00:32:05.297 18:09:09 -- common/autotest_common.sh@926 -- # '[' -z 1872776 ']' 00:32:05.297 18:09:09 -- common/autotest_common.sh@930 -- # kill -0 1872776 00:32:05.297 18:09:09 -- common/autotest_common.sh@931 -- # uname 00:32:05.297 18:09:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:05.297 18:09:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1872776 00:32:05.297 18:09:09 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:05.297 18:09:09 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:05.297 18:09:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1872776' 00:32:05.297 killing process with pid 1872776 00:32:05.297 18:09:09 -- common/autotest_common.sh@945 -- # kill 1872776 00:32:05.297 Received shutdown signal, test time was about 2.000000 seconds 00:32:05.297 00:32:05.297 Latency(us) 00:32:05.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:05.297 =================================================================================================================== 00:32:05.297 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:05.297 18:09:09 -- common/autotest_common.sh@950 -- # wait 1872776 00:32:05.297 18:09:09 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:32:05.297 18:09:09 -- host/digest.sh@77 -- # local rw bs qd 00:32:05.297 18:09:09 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:05.297 18:09:09 -- host/digest.sh@80 -- # rw=randwrite 00:32:05.297 18:09:09 -- host/digest.sh@80 -- # bs=4096 00:32:05.297 18:09:09 -- host/digest.sh@80 -- # qd=128 00:32:05.297 18:09:09 -- host/digest.sh@82 -- # bperfpid=1873393 00:32:05.297 18:09:09 -- host/digest.sh@83 -- # waitforlisten 1873393 /var/tmp/bperf.sock 00:32:05.297 18:09:09 -- common/autotest_common.sh@819 -- # '[' -z 1873393 ']' 00:32:05.297 18:09:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:05.297 18:09:09 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 
00:32:05.297 18:09:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:05.297 18:09:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:05.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:05.297 18:09:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:05.297 18:09:09 -- common/autotest_common.sh@10 -- # set +x 00:32:05.297 [2024-07-22 18:09:09.388216] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:32:05.297 [2024-07-22 18:09:09.388273] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1873393 ] 00:32:05.297 EAL: No free 2048 kB hugepages reported on node 1 00:32:05.297 [2024-07-22 18:09:09.448952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.297 [2024-07-22 18:09:09.507916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:05.297 18:09:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:05.297 18:09:09 -- common/autotest_common.sh@852 -- # return 0 00:32:05.297 18:09:09 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:32:05.297 18:09:09 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:32:05.297 18:09:09 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:05.557 18:09:09 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:05.557 18:09:09 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:06.129 nvme0n1 00:32:06.129 18:09:10 -- host/digest.sh@91 -- # bperf_py perform_tests 00:32:06.129 18:09:10 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:06.129 Running I/O for 2 seconds... 
00:32:08.042 00:32:08.042 Latency(us) 00:32:08.042 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:08.042 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:08.042 nvme0n1 : 2.00 24628.01 96.20 0.00 0.00 5190.04 2394.58 9275.86 00:32:08.042 =================================================================================================================== 00:32:08.042 Total : 24628.01 96.20 0.00 0.00 5190.04 2394.58 9275.86 00:32:08.042 0 00:32:08.042 18:09:12 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:32:08.042 18:09:12 -- host/digest.sh@92 -- # get_accel_stats 00:32:08.042 18:09:12 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:08.042 18:09:12 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:08.042 | select(.opcode=="crc32c") 00:32:08.042 | "\(.module_name) \(.executed)"' 00:32:08.042 18:09:12 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:08.303 18:09:12 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:32:08.303 18:09:12 -- host/digest.sh@93 -- # exp_module=software 00:32:08.303 18:09:12 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:32:08.303 18:09:12 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:08.303 18:09:12 -- host/digest.sh@97 -- # killprocess 1873393 00:32:08.303 18:09:12 -- common/autotest_common.sh@926 -- # '[' -z 1873393 ']' 00:32:08.303 18:09:12 -- common/autotest_common.sh@930 -- # kill -0 1873393 00:32:08.303 18:09:12 -- common/autotest_common.sh@931 -- # uname 00:32:08.303 18:09:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:08.303 18:09:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1873393 00:32:08.303 18:09:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:08.303 18:09:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:08.303 18:09:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1873393' 00:32:08.303 killing process with pid 1873393 00:32:08.303 18:09:12 -- common/autotest_common.sh@945 -- # kill 1873393 00:32:08.303 Received shutdown signal, test time was about 2.000000 seconds 00:32:08.303 00:32:08.303 Latency(us) 00:32:08.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:08.303 =================================================================================================================== 00:32:08.303 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:08.303 18:09:12 -- common/autotest_common.sh@950 -- # wait 1873393 00:32:08.564 18:09:12 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:32:08.564 18:09:12 -- host/digest.sh@77 -- # local rw bs qd 00:32:08.564 18:09:12 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:08.564 18:09:12 -- host/digest.sh@80 -- # rw=randwrite 00:32:08.564 18:09:12 -- host/digest.sh@80 -- # bs=131072 00:32:08.564 18:09:12 -- host/digest.sh@80 -- # qd=16 00:32:08.564 18:09:12 -- host/digest.sh@82 -- # bperfpid=1874451 00:32:08.564 18:09:12 -- host/digest.sh@83 -- # waitforlisten 1874451 /var/tmp/bperf.sock 00:32:08.564 18:09:12 -- common/autotest_common.sh@819 -- # '[' -z 1874451 ']' 00:32:08.564 18:09:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:08.564 18:09:12 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 
--wait-for-rpc 00:32:08.564 18:09:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:08.564 18:09:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:08.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:08.564 18:09:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:08.564 18:09:12 -- common/autotest_common.sh@10 -- # set +x 00:32:08.564 [2024-07-22 18:09:12.691059] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:32:08.564 [2024-07-22 18:09:12.691111] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1874451 ] 00:32:08.564 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:08.564 Zero copy mechanism will not be used. 00:32:08.564 EAL: No free 2048 kB hugepages reported on node 1 00:32:08.564 [2024-07-22 18:09:12.752652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:08.564 [2024-07-22 18:09:12.811558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:08.825 18:09:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:08.825 18:09:12 -- common/autotest_common.sh@852 -- # return 0 00:32:08.825 18:09:12 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:32:08.825 18:09:12 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:32:08.825 18:09:12 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:08.825 18:09:13 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:08.825 18:09:13 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:09.443 nvme0n1 00:32:09.443 18:09:13 -- host/digest.sh@91 -- # bperf_py perform_tests 00:32:09.443 18:09:13 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:09.443 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:09.443 Zero copy mechanism will not be used. 00:32:09.443 Running I/O for 2 seconds... 
00:32:11.989 00:32:11.989 Latency(us) 00:32:11.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:11.989 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:11.989 nvme0n1 : 2.00 5498.57 687.32 0.00 0.00 2904.21 1342.23 13812.97 00:32:11.989 =================================================================================================================== 00:32:11.989 Total : 5498.57 687.32 0.00 0.00 2904.21 1342.23 13812.97 00:32:11.989 0 00:32:11.989 18:09:15 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:32:11.989 18:09:15 -- host/digest.sh@92 -- # get_accel_stats 00:32:11.989 18:09:15 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:11.989 18:09:15 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:11.989 | select(.opcode=="crc32c") 00:32:11.989 | "\(.module_name) \(.executed)"' 00:32:11.989 18:09:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:11.989 18:09:15 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:32:11.989 18:09:15 -- host/digest.sh@93 -- # exp_module=software 00:32:11.989 18:09:15 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:32:11.989 18:09:15 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:11.989 18:09:15 -- host/digest.sh@97 -- # killprocess 1874451 00:32:11.989 18:09:15 -- common/autotest_common.sh@926 -- # '[' -z 1874451 ']' 00:32:11.989 18:09:15 -- common/autotest_common.sh@930 -- # kill -0 1874451 00:32:11.989 18:09:15 -- common/autotest_common.sh@931 -- # uname 00:32:11.989 18:09:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:11.989 18:09:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1874451 00:32:11.989 18:09:15 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:11.989 18:09:15 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:11.989 18:09:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1874451' 00:32:11.989 killing process with pid 1874451 00:32:11.989 18:09:15 -- common/autotest_common.sh@945 -- # kill 1874451 00:32:11.989 Received shutdown signal, test time was about 2.000000 seconds 00:32:11.989 00:32:11.989 Latency(us) 00:32:11.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:11.989 =================================================================================================================== 00:32:11.989 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:11.989 18:09:15 -- common/autotest_common.sh@950 -- # wait 1874451 00:32:11.989 18:09:16 -- host/digest.sh@126 -- # killprocess 1872003 00:32:11.989 18:09:16 -- common/autotest_common.sh@926 -- # '[' -z 1872003 ']' 00:32:11.989 18:09:16 -- common/autotest_common.sh@930 -- # kill -0 1872003 00:32:11.989 18:09:16 -- common/autotest_common.sh@931 -- # uname 00:32:11.989 18:09:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:11.989 18:09:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1872003 00:32:11.989 18:09:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:11.989 18:09:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:11.989 18:09:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1872003' 00:32:11.989 killing process with pid 1872003 00:32:11.989 18:09:16 -- common/autotest_common.sh@945 -- # kill 1872003 00:32:11.989 18:09:16 -- common/autotest_common.sh@950 -- # wait 1872003 
00:32:11.989 00:32:11.989 real 0m15.187s 00:32:11.989 user 0m29.788s 00:32:11.989 sys 0m3.611s 00:32:11.989 18:09:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:11.989 18:09:16 -- common/autotest_common.sh@10 -- # set +x 00:32:11.989 ************************************ 00:32:11.989 END TEST nvmf_digest_clean 00:32:11.989 ************************************ 00:32:11.989 18:09:16 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:32:11.989 18:09:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:11.989 18:09:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:11.989 18:09:16 -- common/autotest_common.sh@10 -- # set +x 00:32:12.250 ************************************ 00:32:12.250 START TEST nvmf_digest_error 00:32:12.250 ************************************ 00:32:12.250 18:09:16 -- common/autotest_common.sh@1104 -- # run_digest_error 00:32:12.250 18:09:16 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:32:12.250 18:09:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:12.250 18:09:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:12.250 18:09:16 -- common/autotest_common.sh@10 -- # set +x 00:32:12.250 18:09:16 -- nvmf/common.sh@469 -- # nvmfpid=1875099 00:32:12.250 18:09:16 -- nvmf/common.sh@470 -- # waitforlisten 1875099 00:32:12.250 18:09:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:12.250 18:09:16 -- common/autotest_common.sh@819 -- # '[' -z 1875099 ']' 00:32:12.250 18:09:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:12.250 18:09:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:12.250 18:09:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:12.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:12.250 18:09:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:12.250 18:09:16 -- common/autotest_common.sh@10 -- # set +x 00:32:12.250 [2024-07-22 18:09:16.323346] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:32:12.250 [2024-07-22 18:09:16.323405] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:12.250 EAL: No free 2048 kB hugepages reported on node 1 00:32:12.250 [2024-07-22 18:09:16.411818] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:12.250 [2024-07-22 18:09:16.474741] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:12.250 [2024-07-22 18:09:16.474851] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:12.250 [2024-07-22 18:09:16.474858] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:12.250 [2024-07-22 18:09:16.474864] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:12.250 [2024-07-22 18:09:16.474880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:13.190 18:09:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:13.190 18:09:17 -- common/autotest_common.sh@852 -- # return 0 00:32:13.190 18:09:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:13.190 18:09:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:13.190 18:09:17 -- common/autotest_common.sh@10 -- # set +x 00:32:13.190 18:09:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:13.190 18:09:17 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:32:13.190 18:09:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.190 18:09:17 -- common/autotest_common.sh@10 -- # set +x 00:32:13.190 [2024-07-22 18:09:17.196912] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:32:13.190 18:09:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.190 18:09:17 -- host/digest.sh@104 -- # common_target_config 00:32:13.190 18:09:17 -- host/digest.sh@43 -- # rpc_cmd 00:32:13.190 18:09:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.190 18:09:17 -- common/autotest_common.sh@10 -- # set +x 00:32:13.190 null0 00:32:13.190 [2024-07-22 18:09:17.271888] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:13.190 [2024-07-22 18:09:17.296073] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:13.190 18:09:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.190 18:09:17 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:32:13.190 18:09:17 -- host/digest.sh@54 -- # local rw bs qd 00:32:13.190 18:09:17 -- host/digest.sh@56 -- # rw=randread 00:32:13.190 18:09:17 -- host/digest.sh@56 -- # bs=4096 00:32:13.190 18:09:17 -- host/digest.sh@56 -- # qd=128 00:32:13.190 18:09:17 -- host/digest.sh@58 -- # bperfpid=1875162 00:32:13.190 18:09:17 -- host/digest.sh@60 -- # waitforlisten 1875162 /var/tmp/bperf.sock 00:32:13.190 18:09:17 -- common/autotest_common.sh@819 -- # '[' -z 1875162 ']' 00:32:13.190 18:09:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:13.190 18:09:17 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:32:13.190 18:09:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:13.190 18:09:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:13.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:13.190 18:09:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:13.190 18:09:17 -- common/autotest_common.sh@10 -- # set +x 00:32:13.190 [2024-07-22 18:09:17.346618] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:32:13.190 [2024-07-22 18:09:17.346664] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1875162 ] 00:32:13.190 EAL: No free 2048 kB hugepages reported on node 1 00:32:13.190 [2024-07-22 18:09:17.406811] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.451 [2024-07-22 18:09:17.466165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:14.021 18:09:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:14.021 18:09:18 -- common/autotest_common.sh@852 -- # return 0 00:32:14.021 18:09:18 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:14.021 18:09:18 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:14.281 18:09:18 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:14.281 18:09:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:14.281 18:09:18 -- common/autotest_common.sh@10 -- # set +x 00:32:14.281 18:09:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:14.281 18:09:18 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:14.281 18:09:18 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:14.542 nvme0n1 00:32:14.542 18:09:18 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:14.542 18:09:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:14.542 18:09:18 -- common/autotest_common.sh@10 -- # set +x 00:32:14.542 18:09:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:14.542 18:09:18 -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:14.542 18:09:18 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:14.542 Running I/O for 2 seconds... 
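Condensed for readability, the nvmf_digest_error setup traced above is: route crc32c on the target into the error accel module, bring up the null0/TCP target, start bdevperf against /var/tmp/bperf.sock, attach the controller with data digest enabled while injection is disabled, then arm crc32c corruption and kick off perform_tests. A rough shell sketch under the paths and addresses visible in this log (rpc.py, bperf.sock, 10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1); it abbreviates the script, it is not the script itself, and the target-side RPCs are assumed to go to the nvmf_tgt's default socket.

    # sketch only -- shape of the randread/4096/qd128 error-injection pass
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # target-side RPCs (nvmf_tgt socket assumed)
    bperf() { "$rpc" -s /var/tmp/bperf.sock "$@"; }                        # bdevperf-side RPCs

    "$rpc" accel_assign_opc -o crc32c -m error          # target: crc32c handled by the error-injection module
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &

    bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    "$rpc" accel_error_inject_error -o crc32c -t disable    # attach with injection disabled first
    bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0              # --ddgst enables TCP data digest on the initiator
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256   # arm crc32c corruption (-i 256 as in this run)
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests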
00:32:14.542 [2024-07-22 18:09:18.817191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:14.542 [2024-07-22 18:09:18.817224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.542 [2024-07-22 18:09:18.817234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:14.803 [2024-07-22 18:09:18.831916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:14.803 [2024-07-22 18:09:18.831942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.803 [2024-07-22 18:09:18.831951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:14.803 [2024-07-22 18:09:18.846401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:14.803 [2024-07-22 18:09:18.846422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.803 [2024-07-22 18:09:18.846430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:14.803 [2024-07-22 18:09:18.860417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:14.803 [2024-07-22 18:09:18.860438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.803 [2024-07-22 18:09:18.860446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:14.803 [2024-07-22 18:09:18.875535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:14.803 [2024-07-22 18:09:18.875556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.803 [2024-07-22 18:09:18.875564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:14.803 [2024-07-22 18:09:18.890746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:14.803 [2024-07-22 18:09:18.890766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.803 [2024-07-22 18:09:18.890774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:14.804 [2024-07-22 18:09:18.905219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:14.804 [2024-07-22 18:09:18.905239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.804 [2024-07-22 18:09:18.905247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:14.804 [2024-07-22 18:09:18.920105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:14.804 [2024-07-22 18:09:18.920125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.804 [2024-07-22 18:09:18.920133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:14.804 [2024-07-22 18:09:18.934460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:14.804 [2024-07-22 18:09:18.934480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.804 [2024-07-22 18:09:18.934488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:14.804 [2024-07-22 18:09:18.948532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:14.804 [2024-07-22 18:09:18.948551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.804 [2024-07-22 18:09:18.948559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:14.804 [2024-07-22 18:09:18.963135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:14.804 [2024-07-22 18:09:18.963155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.804 [2024-07-22 18:09:18.963163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:14.804 [2024-07-22 18:09:18.977958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:14.804 [2024-07-22 18:09:18.977978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.804 [2024-07-22 18:09:18.977986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:14.804 [2024-07-22 18:09:18.992623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:14.804 [2024-07-22 18:09:18.992642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.804 [2024-07-22 18:09:18.992651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:14.804 [2024-07-22 18:09:19.006863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:14.804 [2024-07-22 18:09:19.006884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.804 [2024-07-22 18:09:19.006892] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:14.804 [2024-07-22 18:09:19.021378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:14.804 [2024-07-22 18:09:19.021398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.804 [2024-07-22 18:09:19.021406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:14.804 [2024-07-22 18:09:19.036571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:14.804 [2024-07-22 18:09:19.036592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.804 [2024-07-22 18:09:19.036600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:14.804 [2024-07-22 18:09:19.051167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:14.804 [2024-07-22 18:09:19.051187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.804 [2024-07-22 18:09:19.051195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:14.804 [2024-07-22 18:09:19.065926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:14.804 [2024-07-22 18:09:19.065949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:14.804 [2024-07-22 18:09:19.065958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.066 [2024-07-22 18:09:19.080231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.066 [2024-07-22 18:09:19.080251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.066 [2024-07-22 18:09:19.080263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.066 [2024-07-22 18:09:19.094696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.066 [2024-07-22 18:09:19.094716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.066 [2024-07-22 18:09:19.094724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.066 [2024-07-22 18:09:19.109321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.066 [2024-07-22 18:09:19.109340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.066 [2024-07-22 
18:09:19.109352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.066 [2024-07-22 18:09:19.123748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.066 [2024-07-22 18:09:19.123768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.066 [2024-07-22 18:09:19.123776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.066 [2024-07-22 18:09:19.138322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.066 [2024-07-22 18:09:19.138342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.066 [2024-07-22 18:09:19.138354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.066 [2024-07-22 18:09:19.152892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.066 [2024-07-22 18:09:19.152912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.066 [2024-07-22 18:09:19.152921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.066 [2024-07-22 18:09:19.167681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.066 [2024-07-22 18:09:19.167701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.066 [2024-07-22 18:09:19.167709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.066 [2024-07-22 18:09:19.182409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.066 [2024-07-22 18:09:19.182430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.066 [2024-07-22 18:09:19.182437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.066 [2024-07-22 18:09:19.197103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.066 [2024-07-22 18:09:19.197123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.066 [2024-07-22 18:09:19.197131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.066 [2024-07-22 18:09:19.211526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.066 [2024-07-22 18:09:19.211546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16287 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:32:15.066 [2024-07-22 18:09:19.211554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.066 [2024-07-22 18:09:19.226258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.066 [2024-07-22 18:09:19.226277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.066 [2024-07-22 18:09:19.226286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.066 [2024-07-22 18:09:19.241923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.066 [2024-07-22 18:09:19.241942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.066 [2024-07-22 18:09:19.241950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.066 [2024-07-22 18:09:19.256618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.066 [2024-07-22 18:09:19.256637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.066 [2024-07-22 18:09:19.256645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.066 [2024-07-22 18:09:19.270993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.066 [2024-07-22 18:09:19.271013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.066 [2024-07-22 18:09:19.271021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.066 [2024-07-22 18:09:19.286805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.066 [2024-07-22 18:09:19.286825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.066 [2024-07-22 18:09:19.286833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.066 [2024-07-22 18:09:19.301597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.066 [2024-07-22 18:09:19.301616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.066 [2024-07-22 18:09:19.301624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.066 [2024-07-22 18:09:19.316450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.066 [2024-07-22 18:09:19.316469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:61 nsid:1 lba:7995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.066 [2024-07-22 18:09:19.316477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.066 [2024-07-22 18:09:19.331389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.066 [2024-07-22 18:09:19.331409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.066 [2024-07-22 18:09:19.331421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.327 [2024-07-22 18:09:19.346190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.327 [2024-07-22 18:09:19.346210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.328 [2024-07-22 18:09:19.346218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.328 [2024-07-22 18:09:19.360914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.328 [2024-07-22 18:09:19.360934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.328 [2024-07-22 18:09:19.360942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.328 [2024-07-22 18:09:19.376500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.328 [2024-07-22 18:09:19.376519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.328 [2024-07-22 18:09:19.376527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.328 [2024-07-22 18:09:19.391290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.328 [2024-07-22 18:09:19.391310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.328 [2024-07-22 18:09:19.391318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.328 [2024-07-22 18:09:19.406980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.328 [2024-07-22 18:09:19.407000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.328 [2024-07-22 18:09:19.407008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.328 [2024-07-22 18:09:19.422491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.328 [2024-07-22 18:09:19.422511] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.328 [2024-07-22 18:09:19.422519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.328 [2024-07-22 18:09:19.436892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.328 [2024-07-22 18:09:19.436911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.328 [2024-07-22 18:09:19.436919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.328 [2024-07-22 18:09:19.452386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.328 [2024-07-22 18:09:19.452405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.328 [2024-07-22 18:09:19.452413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.328 [2024-07-22 18:09:19.468044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.328 [2024-07-22 18:09:19.468070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.328 [2024-07-22 18:09:19.468078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.328 [2024-07-22 18:09:19.483804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.328 [2024-07-22 18:09:19.483823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.328 [2024-07-22 18:09:19.483831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.328 [2024-07-22 18:09:19.498979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.328 [2024-07-22 18:09:19.498998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.328 [2024-07-22 18:09:19.499006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.328 [2024-07-22 18:09:19.513460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.328 [2024-07-22 18:09:19.513480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.328 [2024-07-22 18:09:19.513488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.328 [2024-07-22 18:09:19.528433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 
00:32:15.328 [2024-07-22 18:09:19.528452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.328 [2024-07-22 18:09:19.528460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.328 [2024-07-22 18:09:19.543645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.328 [2024-07-22 18:09:19.543664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.328 [2024-07-22 18:09:19.543672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.328 [2024-07-22 18:09:19.558449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.328 [2024-07-22 18:09:19.558468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.328 [2024-07-22 18:09:19.558476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.328 [2024-07-22 18:09:19.573707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.328 [2024-07-22 18:09:19.573726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.328 [2024-07-22 18:09:19.573734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.328 [2024-07-22 18:09:19.588545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.328 [2024-07-22 18:09:19.588565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.328 [2024-07-22 18:09:19.588573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.589 [2024-07-22 18:09:19.604263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.589 [2024-07-22 18:09:19.604282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.589 [2024-07-22 18:09:19.604290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.589 [2024-07-22 18:09:19.619831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.589 [2024-07-22 18:09:19.619851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.589 [2024-07-22 18:09:19.619859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.589 [2024-07-22 18:09:19.634880] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.589 [2024-07-22 18:09:19.634900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.589 [2024-07-22 18:09:19.634907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.589 [2024-07-22 18:09:19.649165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.589 [2024-07-22 18:09:19.649190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.589 [2024-07-22 18:09:19.649200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.589 [2024-07-22 18:09:19.664122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.589 [2024-07-22 18:09:19.664144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.589 [2024-07-22 18:09:19.664152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.589 [2024-07-22 18:09:19.678535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.589 [2024-07-22 18:09:19.678555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.589 [2024-07-22 18:09:19.678563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.589 [2024-07-22 18:09:19.693808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.589 [2024-07-22 18:09:19.693827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.589 [2024-07-22 18:09:19.693835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.589 [2024-07-22 18:09:19.709250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.589 [2024-07-22 18:09:19.709270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.589 [2024-07-22 18:09:19.709278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.589 [2024-07-22 18:09:19.723725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.589 [2024-07-22 18:09:19.723745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.589 [2024-07-22 18:09:19.723756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:32:15.589 [2024-07-22 18:09:19.739400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.589 [2024-07-22 18:09:19.739420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.589 [2024-07-22 18:09:19.739428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.589 [2024-07-22 18:09:19.754028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.589 [2024-07-22 18:09:19.754047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.589 [2024-07-22 18:09:19.754055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.589 [2024-07-22 18:09:19.768709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.589 [2024-07-22 18:09:19.768728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.589 [2024-07-22 18:09:19.768736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.589 [2024-07-22 18:09:19.784850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.589 [2024-07-22 18:09:19.784870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.589 [2024-07-22 18:09:19.784878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.589 [2024-07-22 18:09:19.799534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.589 [2024-07-22 18:09:19.799553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.589 [2024-07-22 18:09:19.799561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.589 [2024-07-22 18:09:19.814725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.589 [2024-07-22 18:09:19.814744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.589 [2024-07-22 18:09:19.814752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.589 [2024-07-22 18:09:19.830116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.589 [2024-07-22 18:09:19.830135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.589 [2024-07-22 18:09:19.830143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.589 [2024-07-22 18:09:19.845836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.589 [2024-07-22 18:09:19.845855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.589 [2024-07-22 18:09:19.845863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.589 [2024-07-22 18:09:19.860536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.589 [2024-07-22 18:09:19.860556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.589 [2024-07-22 18:09:19.860564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.850 [2024-07-22 18:09:19.875926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.850 [2024-07-22 18:09:19.875945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.850 [2024-07-22 18:09:19.875954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.850 [2024-07-22 18:09:19.891411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.850 [2024-07-22 18:09:19.891430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.850 [2024-07-22 18:09:19.891437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.850 [2024-07-22 18:09:19.906146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.850 [2024-07-22 18:09:19.906165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.850 [2024-07-22 18:09:19.906173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.850 [2024-07-22 18:09:19.920485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.850 [2024-07-22 18:09:19.920504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.850 [2024-07-22 18:09:19.920512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.850 [2024-07-22 18:09:19.935624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.850 [2024-07-22 18:09:19.935643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.850 [2024-07-22 18:09:19.935651] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.850 [2024-07-22 18:09:19.951418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.850 [2024-07-22 18:09:19.951437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.850 [2024-07-22 18:09:19.951445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.850 [2024-07-22 18:09:19.965998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.850 [2024-07-22 18:09:19.966018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.850 [2024-07-22 18:09:19.966026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.850 [2024-07-22 18:09:19.981341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.850 [2024-07-22 18:09:19.981364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.850 [2024-07-22 18:09:19.981376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.850 [2024-07-22 18:09:19.996325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.850 [2024-07-22 18:09:19.996344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.850 [2024-07-22 18:09:19.996356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.850 [2024-07-22 18:09:20.011411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.850 [2024-07-22 18:09:20.011432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.850 [2024-07-22 18:09:20.011440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.850 [2024-07-22 18:09:20.025985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.850 [2024-07-22 18:09:20.026005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.850 [2024-07-22 18:09:20.026013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.850 [2024-07-22 18:09:20.040647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.850 [2024-07-22 18:09:20.040666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:15.850 [2024-07-22 18:09:20.040674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.850 [2024-07-22 18:09:20.056085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.850 [2024-07-22 18:09:20.056105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.850 [2024-07-22 18:09:20.056113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.850 [2024-07-22 18:09:20.071048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.850 [2024-07-22 18:09:20.071068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.850 [2024-07-22 18:09:20.071076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.850 [2024-07-22 18:09:20.086530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.850 [2024-07-22 18:09:20.086550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.850 [2024-07-22 18:09:20.086557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.850 [2024-07-22 18:09:20.101501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.850 [2024-07-22 18:09:20.101519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.850 [2024-07-22 18:09:20.101528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:15.850 [2024-07-22 18:09:20.116475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:15.850 [2024-07-22 18:09:20.116499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:15.850 [2024-07-22 18:09:20.116507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:16.111 [2024-07-22 18:09:20.131888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:16.111 [2024-07-22 18:09:20.131907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:16.111 [2024-07-22 18:09:20.131915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:16.111 [2024-07-22 18:09:20.147414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17d9090) 00:32:16.111 [2024-07-22 18:09:20.147434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3146 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:16.111 [... repeated entries omitted: nvme_tcp data digest errors on tqpair=(0x17d9090), each READ on qid:1 (len:1, varying cid/lba) completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22), 2024-07-22 18:09:20.147 through 18:09:20.793 ...]
00:32:16.633
00:32:16.633 Latency(us)
00:32:16.633 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:32:16.633 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:32:16.633   nvme0n1                   :       2.04   16720.22      65.31       0.00     0.00    7496.88    2495.41   47589.22
00:32:16.633 ===================================================================================================================
00:32:16.633 Total                       :              16720.22      65.31       0.00     0.00    7496.88    2495.41   47589.22
00:32:16.633 0
00:32:16.633 18:09:20 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:16.633 18:09:20 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:16.633 18:09:20 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:16.633 | .driver_specific
00:32:16.633 | .nvme_error
00:32:16.633 | .status_code
00:32:16.633 | .command_transient_transport_error'
00:32:16.633 18:09:20 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:16.893 18:09:21 -- host/digest.sh@71 -- # (( 133 > 0 ))
00:32:16.893 18:09:21 -- host/digest.sh@73 -- # killprocess 1875162
00:32:16.893 18:09:21 -- common/autotest_common.sh@926 -- # '[' -z 1875162 ']'
00:32:16.893 18:09:21 -- common/autotest_common.sh@930 -- # kill -0 1875162
00:32:16.893 18:09:21 -- common/autotest_common.sh@931 -- # uname
00:32:16.893 18:09:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:32:16.893 18:09:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1875162
00:32:16.893 18:09:21 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:32:16.893 18:09:21 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:32:16.893 18:09:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1875162'
00:32:16.893 killing process with pid 1875162
00:32:16.893 18:09:21 -- common/autotest_common.sh@945 -- # kill 1875162
00:32:16.893 Received shutdown signal, test time was about 2.000000 seconds
00:32:16.893
00:32:16.893 Latency(us)
00:32:16.893 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:32:16.893 ===================================================================================================================
00:32:16.893 Total                       :                  0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:32:16.893 18:09:21 -- common/autotest_common.sh@950 -- # wait 1875162
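The teardown trace above is the pass/fail check for this run: the harness reads the bdev's NVMe error counters over the bperf RPC socket and requires the transient-transport-error count to be non-zero (here it was 133). Below is a minimal bash sketch of that check, reusing the rpc.py path, socket, and jq filter shown in the trace; the function body is an illustrative reconstruction, not the harness's exact code.

```bash
#!/usr/bin/env bash
# Sketch of the transient-error check performed by host/digest.sh above.
# Paths and the jq filter are copied from the log; the structure is illustrative.
RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bperf.sock

get_transient_errcount() {
    local bdev=$1
    # bdev_get_iostat reports per-bdev NVMe error completions when the
    # controller was set up with --nvme-error-stat.
    "$RPC_PY" -s "$BPERF_SOCK" bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
echo "transient transport errors: $errcount"
(( errcount > 0 ))   # the run above reported 133, so this assertion held
```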
00:32:17.154 18:09:21 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:32:17.154 18:09:21 -- host/digest.sh@54 -- # local rw bs qd
00:32:17.154 18:09:21 -- host/digest.sh@56 -- # rw=randread
00:32:17.154 18:09:21 -- host/digest.sh@56 -- # bs=131072
00:32:17.154 18:09:21 -- host/digest.sh@56 -- # qd=16
00:32:17.154 18:09:21 -- host/digest.sh@58 -- # bperfpid=1875887
00:32:17.154 18:09:21 -- host/digest.sh@60 -- # waitforlisten 1875887 /var/tmp/bperf.sock
00:32:17.154 18:09:21 -- common/autotest_common.sh@819 -- # '[' -z 1875887 ']'
00:32:17.154 18:09:21 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:32:17.154 18:09:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:32:17.154 18:09:21 -- common/autotest_common.sh@824 -- # local max_retries=100
00:32:17.154 18:09:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:32:17.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:32:17.154 18:09:21 -- common/autotest_common.sh@828 -- # xtrace_disable
00:32:17.154 18:09:21 -- common/autotest_common.sh@10 -- # set +x
00:32:17.154 [2024-07-22 18:09:21.282545] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:32:17.154 [2024-07-22 18:09:21.282617] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1875887 ]
00:32:17.154 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:17.154 Zero copy mechanism will not be used.
00:32:17.154 EAL: No free 2048 kB hugepages reported on node 1
00:32:17.154 [2024-07-22 18:09:21.346060] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:17.154 [2024-07-22 18:09:21.405255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:32:18.092 18:09:22 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:32:18.092 18:09:22 -- common/autotest_common.sh@852 -- # return 0
00:32:18.092 18:09:22 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:18.092 18:09:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:32:18.092 18:09:22 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:32:18.092 18:09:22 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:18.092 18:09:22 -- common/autotest_common.sh@10 -- # set +x
00:32:18.092 18:09:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:18.092 18:09:22 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:18.092 18:09:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:32:18.351 nvme0n1
00:32:18.351 18:09:22 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:32:18.351 18:09:22 -- common/autotest_common.sh@551 -- # xtrace_disable
00:32:18.351 18:09:22 -- common/autotest_common.sh@10 -- # set +x
00:32:18.351 18:09:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:32:18.351 18:09:22 -- host/digest.sh@69 -- # bperf_py perform_tests
00:32:18.351 18:09:22 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:18.611 I/O size of 131072 is greater than zero copy threshold (65536).
00:32:18.611 Zero copy mechanism will not be used.
00:32:18.611 Running I/O for 2 seconds...
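The xtrace above brings up the second bdevperf pass: start bdevperf in wait-for-tests mode on its own RPC socket, enable per-bdev NVMe error statistics with a bdev retry count of -1, clear any previous crc32c injection, attach the TCP controller with data digest (--ddgst) enabled, arm crc32c corruption injection, and finally trigger the queued workload. A condensed bash sketch of that sequence follows; every RPC and flag is copied from the trace, while the socket-wait loop is a simplified stand-in for the harness's waitforlisten helper, not its actual implementation.

```bash
#!/usr/bin/env bash
# Condensed sketch of the error-injection pass traced above (illustrative).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock

# Start bdevperf in wait-for-tests mode (-z) with its own RPC socket.
"$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!   # kept for the later killprocess/wait teardown

# Wait until the RPC socket is up (simplified stand-in for waitforlisten).
until [ -S "$SOCK" ]; do sleep 0.1; done

# Record NVMe error completions per bdev and keep retrying failed I/O at the
# bdev layer (flags copied from the trace).
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any previous crc32c injection, attach the target with data digest
# enabled, then arm corruption injection (arguments copied from the trace).
"$SPDK/scripts/rpc.py" -s "$SOCK" accel_error_inject_error -o crc32c -t disable
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$SPDK/scripts/rpc.py" -s "$SOCK" accel_error_inject_error -o crc32c -t corrupt -i 32

# Kick off the queued randread workload; the digest-error completions that
# follow in the log are the expected result of the injected corruption.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
```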
00:32:18.611 [... repeated entries omitted: nvme_tcp data digest errors on tqpair=(0x1106d90), each READ on qid:1 (len:32, varying cid/lba) completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22) while the injected crc32c corruption is active ...]
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:19.136 [2024-07-22 18:09:23.406673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.136 [2024-07-22 18:09:23.406694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.136 [2024-07-22 18:09:23.406702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:19.397 [2024-07-22 18:09:23.417080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.397 [2024-07-22 18:09:23.417101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.397 [2024-07-22 18:09:23.417109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:19.397 [2024-07-22 18:09:23.425752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.397 [2024-07-22 18:09:23.425773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.397 [2024-07-22 18:09:23.425782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.397 [2024-07-22 18:09:23.436519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.397 [2024-07-22 18:09:23.436540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.397 [2024-07-22 18:09:23.436548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:19.397 [2024-07-22 18:09:23.446027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.397 [2024-07-22 18:09:23.446048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.397 [2024-07-22 18:09:23.446055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:19.397 [2024-07-22 18:09:23.455460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.397 [2024-07-22 18:09:23.455481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.397 [2024-07-22 18:09:23.455489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:19.397 [2024-07-22 18:09:23.464652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.397 [2024-07-22 18:09:23.464672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.397 
[2024-07-22 18:09:23.464680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.397 [2024-07-22 18:09:23.474080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.397 [2024-07-22 18:09:23.474099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.397 [2024-07-22 18:09:23.474108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:19.397 [2024-07-22 18:09:23.482751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.397 [2024-07-22 18:09:23.482771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.397 [2024-07-22 18:09:23.482779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:19.397 [2024-07-22 18:09:23.490397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.397 [2024-07-22 18:09:23.490417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.397 [2024-07-22 18:09:23.490425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:19.397 [2024-07-22 18:09:23.498108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.398 [2024-07-22 18:09:23.498127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.398 [2024-07-22 18:09:23.498135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.398 [2024-07-22 18:09:23.503429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.398 [2024-07-22 18:09:23.503449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.398 [2024-07-22 18:09:23.503457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:19.398 [2024-07-22 18:09:23.511494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.398 [2024-07-22 18:09:23.511513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.398 [2024-07-22 18:09:23.511521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:19.398 [2024-07-22 18:09:23.520932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.398 [2024-07-22 18:09:23.520952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12736 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.398 [2024-07-22 18:09:23.520960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:19.398 [2024-07-22 18:09:23.530920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.398 [2024-07-22 18:09:23.530941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.398 [2024-07-22 18:09:23.530948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.398 [2024-07-22 18:09:23.540591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.398 [2024-07-22 18:09:23.540610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.398 [2024-07-22 18:09:23.540618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:19.398 [2024-07-22 18:09:23.550655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.398 [2024-07-22 18:09:23.550679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.398 [2024-07-22 18:09:23.550687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:19.398 [2024-07-22 18:09:23.560240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.398 [2024-07-22 18:09:23.560260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.398 [2024-07-22 18:09:23.560268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:19.398 [2024-07-22 18:09:23.568572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.398 [2024-07-22 18:09:23.568592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.398 [2024-07-22 18:09:23.568600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.398 [2024-07-22 18:09:23.578219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.398 [2024-07-22 18:09:23.578240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.398 [2024-07-22 18:09:23.578247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:19.398 [2024-07-22 18:09:23.586919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.398 [2024-07-22 18:09:23.586939] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.398 [2024-07-22 18:09:23.586947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:19.398 [2024-07-22 18:09:23.596064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.398 [2024-07-22 18:09:23.596084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.398 [2024-07-22 18:09:23.596092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:19.398 [2024-07-22 18:09:23.603470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.398 [2024-07-22 18:09:23.603490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.398 [2024-07-22 18:09:23.603498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.398 [2024-07-22 18:09:23.613223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.398 [2024-07-22 18:09:23.613243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.398 [2024-07-22 18:09:23.613251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:19.398 [2024-07-22 18:09:23.621694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.398 [2024-07-22 18:09:23.621714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.398 [2024-07-22 18:09:23.621722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:19.398 [2024-07-22 18:09:23.630540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.398 [2024-07-22 18:09:23.630560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.398 [2024-07-22 18:09:23.630568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:19.398 [2024-07-22 18:09:23.639875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.398 [2024-07-22 18:09:23.639895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.398 [2024-07-22 18:09:23.639903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.398 [2024-07-22 18:09:23.648041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.398 [2024-07-22 
18:09:23.648062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.398 [2024-07-22 18:09:23.648069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:19.398 [2024-07-22 18:09:23.655061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.398 [2024-07-22 18:09:23.655081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.398 [2024-07-22 18:09:23.655089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:19.398 [2024-07-22 18:09:23.663316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.398 [2024-07-22 18:09:23.663336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.398 [2024-07-22 18:09:23.663344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:19.660 [2024-07-22 18:09:23.673860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.660 [2024-07-22 18:09:23.673881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.660 [2024-07-22 18:09:23.673889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.660 [2024-07-22 18:09:23.681798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.660 [2024-07-22 18:09:23.681818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.660 [2024-07-22 18:09:23.681826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:19.660 [2024-07-22 18:09:23.689339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.660 [2024-07-22 18:09:23.689365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.660 [2024-07-22 18:09:23.689375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:19.660 [2024-07-22 18:09:23.697935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.660 [2024-07-22 18:09:23.697956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.660 [2024-07-22 18:09:23.697969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:19.660 [2024-07-22 18:09:23.707818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1106d90) 00:32:19.660 [2024-07-22 18:09:23.707838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.660 [2024-07-22 18:09:23.707846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.660 [2024-07-22 18:09:23.718943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.660 [2024-07-22 18:09:23.718963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.660 [2024-07-22 18:09:23.718970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:19.660 [2024-07-22 18:09:23.730964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.660 [2024-07-22 18:09:23.730983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.660 [2024-07-22 18:09:23.730991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:19.660 [2024-07-22 18:09:23.741327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.660 [2024-07-22 18:09:23.741347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.660 [2024-07-22 18:09:23.741360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:19.660 [2024-07-22 18:09:23.750509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.660 [2024-07-22 18:09:23.750529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.660 [2024-07-22 18:09:23.750537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.660 [2024-07-22 18:09:23.761910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.660 [2024-07-22 18:09:23.761930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.660 [2024-07-22 18:09:23.761938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:19.660 [2024-07-22 18:09:23.774223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.660 [2024-07-22 18:09:23.774242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.660 [2024-07-22 18:09:23.774250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:19.660 [2024-07-22 18:09:23.785491] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.660 [2024-07-22 18:09:23.785511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.660 [2024-07-22 18:09:23.785519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:19.660 [2024-07-22 18:09:23.795427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.660 [2024-07-22 18:09:23.795451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.660 [2024-07-22 18:09:23.795459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.661 [2024-07-22 18:09:23.804956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.661 [2024-07-22 18:09:23.804976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.661 [2024-07-22 18:09:23.804983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:19.661 [2024-07-22 18:09:23.812591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.661 [2024-07-22 18:09:23.812610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.661 [2024-07-22 18:09:23.812618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:19.661 [2024-07-22 18:09:23.820695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.661 [2024-07-22 18:09:23.820714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.661 [2024-07-22 18:09:23.820722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:19.661 [2024-07-22 18:09:23.828337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.661 [2024-07-22 18:09:23.828362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.661 [2024-07-22 18:09:23.828370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.661 [2024-07-22 18:09:23.835012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.661 [2024-07-22 18:09:23.835032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.661 [2024-07-22 18:09:23.835040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:32:19.661 [2024-07-22 18:09:23.842755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.661 [2024-07-22 18:09:23.842775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.661 [2024-07-22 18:09:23.842783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:19.661 [2024-07-22 18:09:23.850436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.661 [2024-07-22 18:09:23.850455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.661 [2024-07-22 18:09:23.850462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:19.661 [2024-07-22 18:09:23.858187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.661 [2024-07-22 18:09:23.858208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.661 [2024-07-22 18:09:23.858217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.661 [2024-07-22 18:09:23.865483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.661 [2024-07-22 18:09:23.865503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.661 [2024-07-22 18:09:23.865511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:19.661 [2024-07-22 18:09:23.872781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.661 [2024-07-22 18:09:23.872802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.661 [2024-07-22 18:09:23.872810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:19.661 [2024-07-22 18:09:23.881272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.661 [2024-07-22 18:09:23.881292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.661 [2024-07-22 18:09:23.881300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:19.661 [2024-07-22 18:09:23.887830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.661 [2024-07-22 18:09:23.887849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.661 [2024-07-22 18:09:23.887857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.661 [2024-07-22 18:09:23.892860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.661 [2024-07-22 18:09:23.892880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.661 [2024-07-22 18:09:23.892888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:19.661 [2024-07-22 18:09:23.896194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.661 [2024-07-22 18:09:23.896214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.661 [2024-07-22 18:09:23.896222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:19.661 [2024-07-22 18:09:23.903682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.661 [2024-07-22 18:09:23.903701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.661 [2024-07-22 18:09:23.903709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:19.661 [2024-07-22 18:09:23.911460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.661 [2024-07-22 18:09:23.911480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.661 [2024-07-22 18:09:23.911488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.661 [2024-07-22 18:09:23.919411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.661 [2024-07-22 18:09:23.919434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.661 [2024-07-22 18:09:23.919442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:19.661 [2024-07-22 18:09:23.927930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.661 [2024-07-22 18:09:23.927951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.661 [2024-07-22 18:09:23.927959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:19.923 [2024-07-22 18:09:23.937160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.923 [2024-07-22 18:09:23.937180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.923 [2024-07-22 18:09:23.937188] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:19.923 [2024-07-22 18:09:23.946216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.923 [2024-07-22 18:09:23.946237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.923 [2024-07-22 18:09:23.946245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.923 [2024-07-22 18:09:23.953582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.923 [2024-07-22 18:09:23.953609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.923 [2024-07-22 18:09:23.953617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:19.923 [2024-07-22 18:09:23.962314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.923 [2024-07-22 18:09:23.962335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.923 [2024-07-22 18:09:23.962343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:19.923 [2024-07-22 18:09:23.970359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.923 [2024-07-22 18:09:23.970379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.923 [2024-07-22 18:09:23.970387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:19.923 [2024-07-22 18:09:23.980726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.923 [2024-07-22 18:09:23.980747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.923 [2024-07-22 18:09:23.980755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.923 [2024-07-22 18:09:23.988770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.923 [2024-07-22 18:09:23.988791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.923 [2024-07-22 18:09:23.988798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:19.923 [2024-07-22 18:09:23.998649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.923 [2024-07-22 18:09:23.998669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:19.923 [2024-07-22 18:09:23.998677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:19.923 [2024-07-22 18:09:24.007377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.923 [2024-07-22 18:09:24.007396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.923 [2024-07-22 18:09:24.007404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:19.923 [2024-07-22 18:09:24.015552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.923 [2024-07-22 18:09:24.015572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.923 [2024-07-22 18:09:24.015579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.923 [2024-07-22 18:09:24.023475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.923 [2024-07-22 18:09:24.023496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.923 [2024-07-22 18:09:24.023503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:19.923 [2024-07-22 18:09:24.031473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.923 [2024-07-22 18:09:24.031493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.923 [2024-07-22 18:09:24.031500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:19.923 [2024-07-22 18:09:24.039715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.923 [2024-07-22 18:09:24.039735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.923 [2024-07-22 18:09:24.039743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:19.923 [2024-07-22 18:09:24.048189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.923 [2024-07-22 18:09:24.048208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.923 [2024-07-22 18:09:24.048216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.923 [2024-07-22 18:09:24.056500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.923 [2024-07-22 18:09:24.056519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:10 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.923 [2024-07-22 18:09:24.056527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:19.923 [2024-07-22 18:09:24.065686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.923 [2024-07-22 18:09:24.065707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.923 [2024-07-22 18:09:24.065718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:19.923 [2024-07-22 18:09:24.074197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.923 [2024-07-22 18:09:24.074217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.923 [2024-07-22 18:09:24.074225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:19.923 [2024-07-22 18:09:24.081794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.923 [2024-07-22 18:09:24.081813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.923 [2024-07-22 18:09:24.081821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.923 [2024-07-22 18:09:24.086929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.923 [2024-07-22 18:09:24.086949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.924 [2024-07-22 18:09:24.086956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:19.924 [2024-07-22 18:09:24.093787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.924 [2024-07-22 18:09:24.093807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.924 [2024-07-22 18:09:24.093814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:19.924 [2024-07-22 18:09:24.101814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.924 [2024-07-22 18:09:24.101834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.924 [2024-07-22 18:09:24.101842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:19.924 [2024-07-22 18:09:24.111224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.924 [2024-07-22 18:09:24.111244] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.924 [2024-07-22 18:09:24.111252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.924 [2024-07-22 18:09:24.119549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.924 [2024-07-22 18:09:24.119569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.924 [2024-07-22 18:09:24.119577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:19.924 [2024-07-22 18:09:24.127780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.924 [2024-07-22 18:09:24.127801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.924 [2024-07-22 18:09:24.127809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:19.924 [2024-07-22 18:09:24.137545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.924 [2024-07-22 18:09:24.137570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.924 [2024-07-22 18:09:24.137577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:19.924 [2024-07-22 18:09:24.145691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.924 [2024-07-22 18:09:24.145711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.924 [2024-07-22 18:09:24.145719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.924 [2024-07-22 18:09:24.154618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.924 [2024-07-22 18:09:24.154638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.924 [2024-07-22 18:09:24.154646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:19.924 [2024-07-22 18:09:24.163856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.924 [2024-07-22 18:09:24.163876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.924 [2024-07-22 18:09:24.163884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:19.924 [2024-07-22 18:09:24.173175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 
00:32:19.924 [2024-07-22 18:09:24.173196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.924 [2024-07-22 18:09:24.173203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:19.924 [2024-07-22 18:09:24.181456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.924 [2024-07-22 18:09:24.181476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.924 [2024-07-22 18:09:24.181484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:19.924 [2024-07-22 18:09:24.189146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:19.924 [2024-07-22 18:09:24.189166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:19.924 [2024-07-22 18:09:24.189174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:20.186 [2024-07-22 18:09:24.198156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.186 [2024-07-22 18:09:24.198178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.186 [2024-07-22 18:09:24.198186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:20.186 [2024-07-22 18:09:24.207772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.186 [2024-07-22 18:09:24.207791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.186 [2024-07-22 18:09:24.207804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:20.186 [2024-07-22 18:09:24.214999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.186 [2024-07-22 18:09:24.215019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.186 [2024-07-22 18:09:24.215027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:20.186 [2024-07-22 18:09:24.223671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.186 [2024-07-22 18:09:24.223691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.186 [2024-07-22 18:09:24.223699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:20.186 [2024-07-22 18:09:24.231815] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.186 [2024-07-22 18:09:24.231835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.186 [2024-07-22 18:09:24.231843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:20.186 [2024-07-22 18:09:24.240203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.186 [2024-07-22 18:09:24.240223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.186 [2024-07-22 18:09:24.240231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:20.186 [2024-07-22 18:09:24.248174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.186 [2024-07-22 18:09:24.248194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.186 [2024-07-22 18:09:24.248202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:20.186 [2024-07-22 18:09:24.256838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.186 [2024-07-22 18:09:24.256858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.186 [2024-07-22 18:09:24.256866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:20.186 [2024-07-22 18:09:24.265385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.187 [2024-07-22 18:09:24.265412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.187 [2024-07-22 18:09:24.265420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:20.187 [2024-07-22 18:09:24.274412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.187 [2024-07-22 18:09:24.274432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.187 [2024-07-22 18:09:24.274440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:20.187 [2024-07-22 18:09:24.282512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.187 [2024-07-22 18:09:24.282536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.187 [2024-07-22 18:09:24.282544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:32:20.187 [2024-07-22 18:09:24.292167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.187 [2024-07-22 18:09:24.292188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.187 [2024-07-22 18:09:24.292196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:20.187 [2024-07-22 18:09:24.302397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.187 [2024-07-22 18:09:24.302418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.187 [2024-07-22 18:09:24.302425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:20.187 [2024-07-22 18:09:24.308967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.187 [2024-07-22 18:09:24.308988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.187 [2024-07-22 18:09:24.308995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:20.187 [2024-07-22 18:09:24.317526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.187 [2024-07-22 18:09:24.317546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.187 [2024-07-22 18:09:24.317553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:20.187 [2024-07-22 18:09:24.325850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.187 [2024-07-22 18:09:24.325871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.187 [2024-07-22 18:09:24.325879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:20.187 [2024-07-22 18:09:24.334942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.187 [2024-07-22 18:09:24.334963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.187 [2024-07-22 18:09:24.334971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:20.187 [2024-07-22 18:09:24.344286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.187 [2024-07-22 18:09:24.344308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.187 [2024-07-22 18:09:24.344315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:20.187 [2024-07-22 18:09:24.352099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.187 [2024-07-22 18:09:24.352119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.187 [2024-07-22 18:09:24.352127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:20.187 [2024-07-22 18:09:24.362268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.187 [2024-07-22 18:09:24.362288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.187 [2024-07-22 18:09:24.362296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:20.187 [2024-07-22 18:09:24.370790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.187 [2024-07-22 18:09:24.370810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.187 [2024-07-22 18:09:24.370818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:20.187 [2024-07-22 18:09:24.379805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.187 [2024-07-22 18:09:24.379826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.187 [2024-07-22 18:09:24.379833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:20.187 [2024-07-22 18:09:24.388140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.187 [2024-07-22 18:09:24.388160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.187 [2024-07-22 18:09:24.388168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:20.187 [2024-07-22 18:09:24.399712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.187 [2024-07-22 18:09:24.399733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.187 [2024-07-22 18:09:24.399741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:20.187 [2024-07-22 18:09:24.409109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.187 [2024-07-22 18:09:24.409129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.187 [2024-07-22 18:09:24.409137] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:20.187 [2024-07-22 18:09:24.418553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.187 [2024-07-22 18:09:24.418573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.187 [2024-07-22 18:09:24.418581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:20.187 [2024-07-22 18:09:24.428753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.187 [2024-07-22 18:09:24.428774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.187 [2024-07-22 18:09:24.428782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:20.187 [2024-07-22 18:09:24.438574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.187 [2024-07-22 18:09:24.438595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.187 [2024-07-22 18:09:24.438606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:20.187 [2024-07-22 18:09:24.448261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.187 [2024-07-22 18:09:24.448282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.187 [2024-07-22 18:09:24.448290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:20.187 [2024-07-22 18:09:24.455404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.187 [2024-07-22 18:09:24.455425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.187 [2024-07-22 18:09:24.455432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:20.449 [2024-07-22 18:09:24.464807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.449 [2024-07-22 18:09:24.464827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.449 [2024-07-22 18:09:24.464835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:20.449 [2024-07-22 18:09:24.472265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.449 [2024-07-22 18:09:24.472286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:20.449 [2024-07-22 18:09:24.472293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:20.449 [2024-07-22 18:09:24.478190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.449 [2024-07-22 18:09:24.478211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.449 [2024-07-22 18:09:24.478219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:20.449 [2024-07-22 18:09:24.483605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.449 [2024-07-22 18:09:24.483624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.449 [2024-07-22 18:09:24.483632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:20.449 [2024-07-22 18:09:24.490948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.449 [2024-07-22 18:09:24.490969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.449 [2024-07-22 18:09:24.490977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:20.449 [2024-07-22 18:09:24.497095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.449 [2024-07-22 18:09:24.497117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.449 [2024-07-22 18:09:24.497124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:20.449 [2024-07-22 18:09:24.501708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.449 [2024-07-22 18:09:24.501729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.449 [2024-07-22 18:09:24.501737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:20.449 [2024-07-22 18:09:24.506295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.449 [2024-07-22 18:09:24.506315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.449 [2024-07-22 18:09:24.506323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:20.449 [2024-07-22 18:09:24.512204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.449 [2024-07-22 18:09:24.512224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.449 [2024-07-22 18:09:24.512232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:20.449 [2024-07-22 18:09:24.517322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.449 [2024-07-22 18:09:24.517343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.449 [2024-07-22 18:09:24.517356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:20.449 [2024-07-22 18:09:24.526489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.449 [2024-07-22 18:09:24.526510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.449 [2024-07-22 18:09:24.526517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:20.449 [2024-07-22 18:09:24.533760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.449 [2024-07-22 18:09:24.533780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.449 [2024-07-22 18:09:24.533788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:20.449 [2024-07-22 18:09:24.543015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.449 [2024-07-22 18:09:24.543035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.449 [2024-07-22 18:09:24.543043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:20.449 [2024-07-22 18:09:24.551003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.449 [2024-07-22 18:09:24.551023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.449 [2024-07-22 18:09:24.551031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:20.449 [2024-07-22 18:09:24.561577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.449 [2024-07-22 18:09:24.561597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.449 [2024-07-22 18:09:24.561609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:20.449 [2024-07-22 18:09:24.570233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.449 [2024-07-22 18:09:24.570253] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.449 [2024-07-22 18:09:24.570261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:20.449 [2024-07-22 18:09:24.578913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.449 [2024-07-22 18:09:24.578933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.449 [2024-07-22 18:09:24.578941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:20.449 [2024-07-22 18:09:24.588475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.449 [2024-07-22 18:09:24.588495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.449 [2024-07-22 18:09:24.588503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:20.449 [2024-07-22 18:09:24.595207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.449 [2024-07-22 18:09:24.595227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.449 [2024-07-22 18:09:24.595235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:20.450 [2024-07-22 18:09:24.603002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.450 [2024-07-22 18:09:24.603022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.450 [2024-07-22 18:09:24.603030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:20.450 [2024-07-22 18:09:24.612526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.450 [2024-07-22 18:09:24.612546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.450 [2024-07-22 18:09:24.612554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:20.450 [2024-07-22 18:09:24.621951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.450 [2024-07-22 18:09:24.621971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.450 [2024-07-22 18:09:24.621979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:20.450 [2024-07-22 18:09:24.631096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 
00:32:20.450 [2024-07-22 18:09:24.631118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.450 [2024-07-22 18:09:24.631127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:20.450 [2024-07-22 18:09:24.642610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.450 [2024-07-22 18:09:24.642639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.450 [2024-07-22 18:09:24.642649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:20.450 [2024-07-22 18:09:24.651812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.450 [2024-07-22 18:09:24.651834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.450 [2024-07-22 18:09:24.651842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:20.450 [2024-07-22 18:09:24.661467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.450 [2024-07-22 18:09:24.661488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.450 [2024-07-22 18:09:24.661496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:20.450 [2024-07-22 18:09:24.673615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.450 [2024-07-22 18:09:24.673636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.450 [2024-07-22 18:09:24.673644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:20.450 [2024-07-22 18:09:24.686241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.450 [2024-07-22 18:09:24.686262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.450 [2024-07-22 18:09:24.686270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:20.450 [2024-07-22 18:09:24.696531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.450 [2024-07-22 18:09:24.696550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.450 [2024-07-22 18:09:24.696558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:20.450 [2024-07-22 18:09:24.705888] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1106d90) 00:32:20.450 [2024-07-22 18:09:24.705909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:20.450 [2024-07-22 18:09:24.705916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:20.450 00:32:20.450 Latency(us) 00:32:20.450 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:20.450 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:20.450 nvme0n1 : 2.00 3730.08 466.26 0.00 0.00 4284.88 636.46 12804.73 00:32:20.450 =================================================================================================================== 00:32:20.450 Total : 3730.08 466.26 0.00 0.00 4284.88 636.46 12804.73 00:32:20.450 0 00:32:20.711 18:09:24 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:20.711 18:09:24 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:20.711 18:09:24 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:20.711 | .driver_specific 00:32:20.711 | .nvme_error 00:32:20.711 | .status_code 00:32:20.711 | .command_transient_transport_error' 00:32:20.711 18:09:24 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:20.711 18:09:24 -- host/digest.sh@71 -- # (( 241 > 0 )) 00:32:20.711 18:09:24 -- host/digest.sh@73 -- # killprocess 1875887 00:32:20.711 18:09:24 -- common/autotest_common.sh@926 -- # '[' -z 1875887 ']' 00:32:20.711 18:09:24 -- common/autotest_common.sh@930 -- # kill -0 1875887 00:32:20.711 18:09:24 -- common/autotest_common.sh@931 -- # uname 00:32:20.711 18:09:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:20.711 18:09:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1875887 00:32:20.711 18:09:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:20.711 18:09:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:20.711 18:09:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1875887' 00:32:20.711 killing process with pid 1875887 00:32:20.711 18:09:24 -- common/autotest_common.sh@945 -- # kill 1875887 00:32:20.711 Received shutdown signal, test time was about 2.000000 seconds 00:32:20.711 00:32:20.711 Latency(us) 00:32:20.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:20.711 =================================================================================================================== 00:32:20.711 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:20.711 18:09:24 -- common/autotest_common.sh@950 -- # wait 1875887 00:32:20.972 18:09:25 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:32:20.972 18:09:25 -- host/digest.sh@54 -- # local rw bs qd 00:32:20.972 18:09:25 -- host/digest.sh@56 -- # rw=randwrite 00:32:20.972 18:09:25 -- host/digest.sh@56 -- # bs=4096 00:32:20.972 18:09:25 -- host/digest.sh@56 -- # qd=128 00:32:20.972 18:09:25 -- host/digest.sh@58 -- # bperfpid=1876576 00:32:20.972 18:09:25 -- host/digest.sh@60 -- # waitforlisten 1876576 /var/tmp/bperf.sock 00:32:20.972 18:09:25 -- common/autotest_common.sh@819 -- # '[' -z 1876576 ']' 00:32:20.972 18:09:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:20.972 18:09:25 -- 
host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:32:20.972 18:09:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:20.972 18:09:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:20.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:20.972 18:09:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:20.972 18:09:25 -- common/autotest_common.sh@10 -- # set +x 00:32:20.972 [2024-07-22 18:09:25.143420] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:32:20.972 [2024-07-22 18:09:25.143477] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1876576 ] 00:32:20.972 EAL: No free 2048 kB hugepages reported on node 1 00:32:20.972 [2024-07-22 18:09:25.203488] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:21.233 [2024-07-22 18:09:25.262531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:21.803 18:09:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:21.803 18:09:25 -- common/autotest_common.sh@852 -- # return 0 00:32:21.803 18:09:25 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:21.803 18:09:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:22.063 18:09:26 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:22.063 18:09:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:22.064 18:09:26 -- common/autotest_common.sh@10 -- # set +x 00:32:22.064 18:09:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:22.064 18:09:26 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:22.064 18:09:26 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:22.324 nvme0n1 00:32:22.585 18:09:26 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:32:22.585 18:09:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:22.585 18:09:26 -- common/autotest_common.sh@10 -- # set +x 00:32:22.585 18:09:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:22.585 18:09:26 -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:22.585 18:09:26 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:22.585 Running I/O for 2 seconds... 
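For readers following the digest.sh trace above, the randwrite error-injection pass it drives can be condensed into a short standalone shell sketch. Every RPC name, flag, path, address, and the jq filter below is copied from the trace in this log; the SPDK_DIR variable and the two rpc helper variables are shorthand introduced here, and the assumption that rpc_cmd addresses the default RPC socket of the app under test (while bperf_rpc addresses /var/tmp/bperf.sock) follows the helper names seen in the trace rather than anything stated explicitly in this output. This is a sketch of the traced flow, not the digest.sh script itself.

  #!/usr/bin/env bash
  # Sketch condensed from the digest.sh trace above (paths and addresses as logged).
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # workspace checkout used in this run
  BPERF_RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"   # bperf_rpc in the trace
  TARGET_RPC="$SPDK_DIR/scripts/rpc.py"                         # rpc_cmd in the trace (default socket; assumed)

  # Start bdevperf waiting on its RPC socket (flags as logged: randwrite, 4096-byte IO, qd 128, 2 s).
  "$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

  # Enable per-error-code accounting, retry indefinitely, and attach the controller with the
  # data digest enabled while crc32c error injection is still disabled.
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $TARGET_RPC accel_error_inject_error -o crc32c -t disable
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt 256 crc32c operations, then kick off the timed workload.
  $TARGET_RPC accel_error_inject_error -o crc32c -t corrupt -i 256
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

  # Afterwards the test counts transient transport errors, mirroring the check earlier in this
  # log ("(( 241 > 0 ))" for the preceding randread pass).
  errs=$($BPERF_RPC bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errs > 0 ))

The digest errors that follow are the expected result of this injection: each corrupted crc32c shows up as a data digest error on the TCP qpair and is surfaced as a COMMAND TRANSIENT TRANSPORT ERROR completion, which is what the iostat-based check counts.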
00:32:22.585 [2024-07-22 18:09:26.731346] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f3e60 00:32:22.585 [2024-07-22 18:09:26.731748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.585 [2024-07-22 18:09:26.731778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:22.585 [2024-07-22 18:09:26.743237] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f7538 00:32:22.585 [2024-07-22 18:09:26.744692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.585 [2024-07-22 18:09:26.744712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:22.585 [2024-07-22 18:09:26.753794] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190ebb98 00:32:22.585 [2024-07-22 18:09:26.755253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.585 [2024-07-22 18:09:26.755273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:22.585 [2024-07-22 18:09:26.764322] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f8618 00:32:22.585 [2024-07-22 18:09:26.765771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.585 [2024-07-22 18:09:26.765790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:22.585 [2024-07-22 18:09:26.774879] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e4de8 00:32:22.585 [2024-07-22 18:09:26.776371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.585 [2024-07-22 18:09:26.776390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:22.585 [2024-07-22 18:09:26.785648] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f81e0 00:32:22.585 [2024-07-22 18:09:26.786420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.585 [2024-07-22 18:09:26.786438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:22.585 [2024-07-22 18:09:26.794808] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e4578 00:32:22.585 [2024-07-22 18:09:26.795947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.585 [2024-07-22 18:09:26.795966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 
sqhd:0077 p:0 m:0 dnr:0 00:32:22.585 [2024-07-22 18:09:26.805344] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190ef6a8 00:32:22.585 [2024-07-22 18:09:26.806532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.585 [2024-07-22 18:09:26.806551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:22.585 [2024-07-22 18:09:26.816165] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f2d80 00:32:22.585 [2024-07-22 18:09:26.817477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.585 [2024-07-22 18:09:26.817495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:22.585 [2024-07-22 18:09:26.826647] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f6890 00:32:22.585 [2024-07-22 18:09:26.827736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.585 [2024-07-22 18:09:26.827755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:22.585 [2024-07-22 18:09:26.837188] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f6890 00:32:22.585 [2024-07-22 18:09:26.838484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.585 [2024-07-22 18:09:26.838503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:22.585 [2024-07-22 18:09:26.847669] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f6890 00:32:22.585 [2024-07-22 18:09:26.848713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.585 [2024-07-22 18:09:26.848731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:22.585 [2024-07-22 18:09:26.858140] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f6890 00:32:22.585 [2024-07-22 18:09:26.859281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.585 [2024-07-22 18:09:26.859300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:22.846 [2024-07-22 18:09:26.868585] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e6738 00:32:22.846 [2024-07-22 18:09:26.869784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.846 [2024-07-22 18:09:26.869803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:22.846 [2024-07-22 18:09:26.879058] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190ea248 00:32:22.846 [2024-07-22 18:09:26.880328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.847 [2024-07-22 18:09:26.880347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:22.847 [2024-07-22 18:09:26.888986] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190eee38 00:32:22.847 [2024-07-22 18:09:26.889849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.847 [2024-07-22 18:09:26.889872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:22.847 [2024-07-22 18:09:26.899521] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e5658 00:32:22.847 [2024-07-22 18:09:26.900630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.847 [2024-07-22 18:09:26.900649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:22.847 [2024-07-22 18:09:26.910024] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f7538 00:32:22.847 [2024-07-22 18:09:26.910922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.847 [2024-07-22 18:09:26.910940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:22.847 [2024-07-22 18:09:26.920530] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e84c0 00:32:22.847 [2024-07-22 18:09:26.921446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.847 [2024-07-22 18:09:26.921464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:22.847 [2024-07-22 18:09:26.931035] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f57b0 00:32:22.847 [2024-07-22 18:09:26.932192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.847 [2024-07-22 18:09:26.932211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:22.847 [2024-07-22 18:09:26.941843] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f4f40 00:32:22.847 [2024-07-22 18:09:26.942838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.847 [2024-07-22 18:09:26.942856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:22.847 [2024-07-22 18:09:26.952334] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f2d80 00:32:22.847 [2024-07-22 18:09:26.953384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.847 [2024-07-22 18:09:26.953403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:22.847 [2024-07-22 18:09:26.962818] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f2d80 00:32:22.847 [2024-07-22 18:09:26.964081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.847 [2024-07-22 18:09:26.964100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:22.847 [2024-07-22 18:09:26.973298] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f4f40 00:32:22.847 [2024-07-22 18:09:26.974384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.847 [2024-07-22 18:09:26.974403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:22.847 [2024-07-22 18:09:26.984048] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190fa7d8 00:32:22.847 [2024-07-22 18:09:26.985269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.847 [2024-07-22 18:09:26.985288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:22.847 [2024-07-22 18:09:26.993969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190eb328 00:32:22.847 [2024-07-22 18:09:26.994383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.847 [2024-07-22 18:09:26.994401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:22.847 [2024-07-22 18:09:27.004844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f8e88 00:32:22.847 [2024-07-22 18:09:27.005627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.847 [2024-07-22 18:09:27.005645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:22.847 [2024-07-22 18:09:27.015543] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190ed0b0 00:32:22.847 [2024-07-22 18:09:27.016533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.847 [2024-07-22 18:09:27.016551] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:22.847 [2024-07-22 18:09:27.026394] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190fa7d8 00:32:22.847 [2024-07-22 18:09:27.027602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.847 [2024-07-22 18:09:27.027621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:22.847 [2024-07-22 18:09:27.036908] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f0788 00:32:22.847 [2024-07-22 18:09:27.038119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.847 [2024-07-22 18:09:27.038138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:22.847 [2024-07-22 18:09:27.047394] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190eff18 00:32:22.847 [2024-07-22 18:09:27.048854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.847 [2024-07-22 18:09:27.048873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:22.847 [2024-07-22 18:09:27.057880] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e7818 00:32:22.847 [2024-07-22 18:09:27.059133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.847 [2024-07-22 18:09:27.059152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:22.847 [2024-07-22 18:09:27.068383] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e9e10 00:32:22.847 [2024-07-22 18:09:27.069827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.847 [2024-07-22 18:09:27.069846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:22.847 [2024-07-22 18:09:27.078534] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f1ca0 00:32:22.847 [2024-07-22 18:09:27.079315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.847 [2024-07-22 18:09:27.079333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:22.847 [2024-07-22 18:09:27.089185] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e2c28 00:32:22.847 [2024-07-22 18:09:27.090267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.847 [2024-07-22 18:09:27.090286] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:22.847 [2024-07-22 18:09:27.099727] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190eff18 00:32:22.847 [2024-07-22 18:09:27.100740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.847 [2024-07-22 18:09:27.100758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:22.847 [2024-07-22 18:09:27.109653] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190ff3c8 00:32:22.847 [2024-07-22 18:09:27.110037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.847 [2024-07-22 18:09:27.110055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:22.847 [2024-07-22 18:09:27.120695] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e5a90 00:32:22.847 [2024-07-22 18:09:27.121244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:22.847 [2024-07-22 18:09:27.121262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:23.108 [2024-07-22 18:09:27.131369] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e3d08 00:32:23.108 [2024-07-22 18:09:27.132324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.108 [2024-07-22 18:09:27.132343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:23.108 [2024-07-22 18:09:27.141870] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190fe720 00:32:23.108 [2024-07-22 18:09:27.142871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.108 [2024-07-22 18:09:27.142893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:23.108 [2024-07-22 18:09:27.152369] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190efae0 00:32:23.108 [2024-07-22 18:09:27.153614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.108 [2024-07-22 18:09:27.153633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:23.108 [2024-07-22 18:09:27.162922] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e38d0 00:32:23.108 [2024-07-22 18:09:27.164185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.108 [2024-07-22 
18:09:27.164207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:23.108 [2024-07-22 18:09:27.173421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f6890 00:32:23.108 [2024-07-22 18:09:27.174716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.108 [2024-07-22 18:09:27.174734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:23.108 [2024-07-22 18:09:27.183898] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f92c0 00:32:23.108 [2024-07-22 18:09:27.185023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.108 [2024-07-22 18:09:27.185041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:23.108 [2024-07-22 18:09:27.194394] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e6b70 00:32:23.108 [2024-07-22 18:09:27.195765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.108 [2024-07-22 18:09:27.195784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:23.108 [2024-07-22 18:09:27.204889] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e84c0 00:32:23.108 [2024-07-22 18:09:27.206062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.108 [2024-07-22 18:09:27.206080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:23.108 [2024-07-22 18:09:27.215872] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f8618 00:32:23.108 [2024-07-22 18:09:27.217174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.108 [2024-07-22 18:09:27.217192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:23.108 [2024-07-22 18:09:27.226863] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190fef90 00:32:23.108 [2024-07-22 18:09:27.228347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.109 [2024-07-22 18:09:27.228369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:23.109 [2024-07-22 18:09:27.237646] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f7da8 00:32:23.109 [2024-07-22 18:09:27.239023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13906 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:32:23.109 [2024-07-22 18:09:27.239041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:23.109 [2024-07-22 18:09:27.248377] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e5a90 00:32:23.109 [2024-07-22 18:09:27.249515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.109 [2024-07-22 18:09:27.249533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:23.109 [2024-07-22 18:09:27.257492] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f8618 00:32:23.109 [2024-07-22 18:09:27.258581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.109 [2024-07-22 18:09:27.258606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:23.109 [2024-07-22 18:09:27.267934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f3a28 00:32:23.109 [2024-07-22 18:09:27.269062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.109 [2024-07-22 18:09:27.269080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:23.109 [2024-07-22 18:09:27.278374] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e9e10 00:32:23.109 [2024-07-22 18:09:27.279468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.109 [2024-07-22 18:09:27.279486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:23.109 [2024-07-22 18:09:27.288836] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f2d80 00:32:23.109 [2024-07-22 18:09:27.289661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.109 [2024-07-22 18:09:27.289679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:23.109 [2024-07-22 18:09:27.299282] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e4578 00:32:23.109 [2024-07-22 18:09:27.300197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.109 [2024-07-22 18:09:27.300215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:23.109 [2024-07-22 18:09:27.310407] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f0350 00:32:23.109 [2024-07-22 18:09:27.311539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17080 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.109 [2024-07-22 18:09:27.311557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.109 [2024-07-22 18:09:27.321613] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f8e88 00:32:23.109 [2024-07-22 18:09:27.322329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:18600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.109 [2024-07-22 18:09:27.322347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:23.109 [2024-07-22 18:09:27.331999] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e4140 00:32:23.109 [2024-07-22 18:09:27.332770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.109 [2024-07-22 18:09:27.332788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:23.109 [2024-07-22 18:09:27.342551] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190fdeb0 00:32:23.109 [2024-07-22 18:09:27.343316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.109 [2024-07-22 18:09:27.343334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:23.109 [2024-07-22 18:09:27.352999] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f7970 00:32:23.109 [2024-07-22 18:09:27.353778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.109 [2024-07-22 18:09:27.353796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:23.109 [2024-07-22 18:09:27.363479] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f6458 00:32:23.109 [2024-07-22 18:09:27.364166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.109 [2024-07-22 18:09:27.364184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:23.109 [2024-07-22 18:09:27.374032] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190eaab8 00:32:23.109 [2024-07-22 18:09:27.374788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.109 [2024-07-22 18:09:27.374806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:23.371 [2024-07-22 18:09:27.384333] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f2510 00:32:23.371 [2024-07-22 18:09:27.385034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 
nsid:1 lba:4125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.371 [2024-07-22 18:09:27.385052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:23.371 [2024-07-22 18:09:27.394920] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f2948 00:32:23.371 [2024-07-22 18:09:27.395535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.371 [2024-07-22 18:09:27.395553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:23.371 [2024-07-22 18:09:27.405346] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f6020 00:32:23.371 [2024-07-22 18:09:27.405986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.371 [2024-07-22 18:09:27.406004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:23.371 [2024-07-22 18:09:27.415641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f0ff8 00:32:23.371 [2024-07-22 18:09:27.416464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.371 [2024-07-22 18:09:27.416482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:23.371 [2024-07-22 18:09:27.425574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e1710 00:32:23.371 [2024-07-22 18:09:27.426604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.371 [2024-07-22 18:09:27.426622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:23.371 [2024-07-22 18:09:27.436205] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e3060 00:32:23.371 [2024-07-22 18:09:27.437065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.371 [2024-07-22 18:09:27.437084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:23.371 [2024-07-22 18:09:27.446842] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e3060 00:32:23.371 [2024-07-22 18:09:27.447829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.371 [2024-07-22 18:09:27.447847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:23.371 [2024-07-22 18:09:27.457296] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e3060 00:32:23.371 [2024-07-22 18:09:27.458297] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.371 [2024-07-22 18:09:27.458315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:23.371 [2024-07-22 18:09:27.467800] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e3060 00:32:23.371 [2024-07-22 18:09:27.469012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.371 [2024-07-22 18:09:27.469030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:23.371 [2024-07-22 18:09:27.478311] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f20d8 00:32:23.371 [2024-07-22 18:09:27.479397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.371 [2024-07-22 18:09:27.479416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:23.371 [2024-07-22 18:09:27.488810] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190ef270 00:32:23.371 [2024-07-22 18:09:27.489906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.371 [2024-07-22 18:09:27.489924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:23.371 [2024-07-22 18:09:27.499294] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f5378 00:32:23.371 [2024-07-22 18:09:27.500634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.371 [2024-07-22 18:09:27.500652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:23.371 [2024-07-22 18:09:27.509793] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e1b48 00:32:23.371 [2024-07-22 18:09:27.511124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.371 [2024-07-22 18:09:27.511142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:23.371 [2024-07-22 18:09:27.520286] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f7100 00:32:23.371 [2024-07-22 18:09:27.521459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.371 [2024-07-22 18:09:27.521477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:23.371 [2024-07-22 18:09:27.530814] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190eaab8 00:32:23.371 [2024-07-22 18:09:27.532011] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.371 [2024-07-22 18:09:27.532032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:23.371 [2024-07-22 18:09:27.541335] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190ee5c8 00:32:23.371 [2024-07-22 18:09:27.542572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.371 [2024-07-22 18:09:27.542590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:23.372 [2024-07-22 18:09:27.551806] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190eaab8 00:32:23.372 [2024-07-22 18:09:27.553051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.372 [2024-07-22 18:09:27.553069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:23.372 [2024-07-22 18:09:27.562278] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f6890 00:32:23.372 [2024-07-22 18:09:27.563537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.372 [2024-07-22 18:09:27.563556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:23.372 [2024-07-22 18:09:27.572796] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e4140 00:32:23.372 [2024-07-22 18:09:27.574267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.372 [2024-07-22 18:09:27.574285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:23.372 [2024-07-22 18:09:27.583280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f2d80 00:32:23.372 [2024-07-22 18:09:27.584564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.372 [2024-07-22 18:09:27.584582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:23.372 [2024-07-22 18:09:27.594084] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e7c50 00:32:23.372 [2024-07-22 18:09:27.595330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.372 [2024-07-22 18:09:27.595352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:23.372 [2024-07-22 18:09:27.604650] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e5a90 00:32:23.372 [2024-07-22 
18:09:27.605907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.372 [2024-07-22 18:09:27.605925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:23.372 [2024-07-22 18:09:27.615162] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190ee5c8 00:32:23.372 [2024-07-22 18:09:27.616435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.372 [2024-07-22 18:09:27.616453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:23.372 [2024-07-22 18:09:27.626088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190eff18 00:32:23.372 [2024-07-22 18:09:27.627151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.372 [2024-07-22 18:09:27.627170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:23.372 [2024-07-22 18:09:27.635305] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190fac10 00:32:23.372 [2024-07-22 18:09:27.636219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.372 [2024-07-22 18:09:27.636237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:23.372 [2024-07-22 18:09:27.645790] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190fac10 00:32:23.632 [2024-07-22 18:09:27.646734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.633 [2024-07-22 18:09:27.646752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:23.633 [2024-07-22 18:09:27.656272] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190fac10 00:32:23.633 [2024-07-22 18:09:27.657216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.633 [2024-07-22 18:09:27.657234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:23.633 [2024-07-22 18:09:27.666703] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190fb048 00:32:23.633 [2024-07-22 18:09:27.667776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.633 [2024-07-22 18:09:27.667795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:23.633 [2024-07-22 18:09:27.677144] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e88f8 
00:32:23.633 [2024-07-22 18:09:27.678044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.633 [2024-07-22 18:09:27.678062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:23.633 [2024-07-22 18:09:27.689003] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f5378 00:32:23.633 [2024-07-22 18:09:27.690128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.633 [2024-07-22 18:09:27.690147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.633 [2024-07-22 18:09:27.699440] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e27f0 00:32:23.633 [2024-07-22 18:09:27.700561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.633 [2024-07-22 18:09:27.700580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:23.633 [2024-07-22 18:09:27.709884] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190fac10 00:32:23.633 [2024-07-22 18:09:27.710984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.633 [2024-07-22 18:09:27.711003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:23.633 [2024-07-22 18:09:27.720338] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f1430 00:32:23.633 [2024-07-22 18:09:27.721447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.633 [2024-07-22 18:09:27.721467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:23.633 [2024-07-22 18:09:27.730762] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e6738 00:32:23.633 [2024-07-22 18:09:27.731779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.633 [2024-07-22 18:09:27.731797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:23.633 [2024-07-22 18:09:27.741247] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190ed4e8 00:32:23.633 [2024-07-22 18:09:27.742283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.633 [2024-07-22 18:09:27.742301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:23.633 [2024-07-22 18:09:27.751705] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) 
with pdu=0x2000190f3a28 00:32:23.633 [2024-07-22 18:09:27.752719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.633 [2024-07-22 18:09:27.752737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:23.633 [2024-07-22 18:09:27.762168] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e4578 00:32:23.633 [2024-07-22 18:09:27.763158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.633 [2024-07-22 18:09:27.763176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:23.633 [2024-07-22 18:09:27.772631] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f92c0 00:32:23.633 [2024-07-22 18:09:27.773373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.633 [2024-07-22 18:09:27.773392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:23.633 [2024-07-22 18:09:27.783079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f1430 00:32:23.633 [2024-07-22 18:09:27.783805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.633 [2024-07-22 18:09:27.783824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:23.633 [2024-07-22 18:09:27.793538] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e27f0 00:32:23.633 [2024-07-22 18:09:27.794254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.633 [2024-07-22 18:09:27.794273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:23.633 [2024-07-22 18:09:27.804023] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190eff18 00:32:23.633 [2024-07-22 18:09:27.804812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.633 [2024-07-22 18:09:27.804834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:23.633 [2024-07-22 18:09:27.814467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e4578 00:32:23.633 [2024-07-22 18:09:27.815074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.633 [2024-07-22 18:09:27.815093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:23.633 [2024-07-22 18:09:27.824919] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x19e4a90) with pdu=0x2000190f3e60 00:32:23.633 [2024-07-22 18:09:27.825536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.633 [2024-07-22 18:09:27.825554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:23.633 [2024-07-22 18:09:27.835383] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f81e0 00:32:23.633 [2024-07-22 18:09:27.836838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.633 [2024-07-22 18:09:27.836856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:23.633 [2024-07-22 18:09:27.845651] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f81e0 00:32:23.633 [2024-07-22 18:09:27.846932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.633 [2024-07-22 18:09:27.846951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:23.633 [2024-07-22 18:09:27.856074] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e99d8 00:32:23.633 [2024-07-22 18:09:27.857285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.633 [2024-07-22 18:09:27.857303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:23.633 [2024-07-22 18:09:27.866586] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e9e10 00:32:23.633 [2024-07-22 18:09:27.867827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.633 [2024-07-22 18:09:27.867845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:23.633 [2024-07-22 18:09:27.877082] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190efae0 00:32:23.633 [2024-07-22 18:09:27.878333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.633 [2024-07-22 18:09:27.878355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:23.633 [2024-07-22 18:09:27.887865] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e27f0 00:32:23.633 [2024-07-22 18:09:27.889081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.633 [2024-07-22 18:09:27.889099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:23.633 [2024-07-22 18:09:27.898635] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f2948 00:32:23.633 [2024-07-22 18:09:27.899942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.633 [2024-07-22 18:09:27.899960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:23.895 [2024-07-22 18:09:27.908782] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e6300 00:32:23.895 [2024-07-22 18:09:27.909231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.895 [2024-07-22 18:09:27.909249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:23.895 [2024-07-22 18:09:27.918779] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f7970 00:32:23.895 [2024-07-22 18:09:27.920427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.895 [2024-07-22 18:09:27.920446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:23.895 [2024-07-22 18:09:27.929888] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f4b08 00:32:23.895 [2024-07-22 18:09:27.931293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.895 [2024-07-22 18:09:27.931312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:23.895 [2024-07-22 18:09:27.940351] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f6020 00:32:23.895 [2024-07-22 18:09:27.941583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.895 [2024-07-22 18:09:27.941602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:23.895 [2024-07-22 18:09:27.950846] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e99d8 00:32:23.895 [2024-07-22 18:09:27.952076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.895 [2024-07-22 18:09:27.952094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:23.895 [2024-07-22 18:09:27.961347] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e73e0 00:32:23.895 [2024-07-22 18:09:27.962615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.895 [2024-07-22 18:09:27.962634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:23.895 
[2024-07-22 18:09:27.971843] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f96f8 00:32:23.895 [2024-07-22 18:09:27.973154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.895 [2024-07-22 18:09:27.973172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:23.895 [2024-07-22 18:09:27.982367] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190ea248 00:32:23.895 [2024-07-22 18:09:27.983437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.895 [2024-07-22 18:09:27.983455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:32:23.895 [2024-07-22 18:09:27.992876] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f5378 00:32:23.895 [2024-07-22 18:09:27.994086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.895 [2024-07-22 18:09:27.994104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:23.895 [2024-07-22 18:09:28.003391] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f5378 00:32:23.895 [2024-07-22 18:09:28.004568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.895 [2024-07-22 18:09:28.004586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:23.895 [2024-07-22 18:09:28.013890] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f3e60 00:32:23.895 [2024-07-22 18:09:28.015084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.895 [2024-07-22 18:09:28.015102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:23.895 [2024-07-22 18:09:28.024362] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e38d0 00:32:23.895 [2024-07-22 18:09:28.025559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.895 [2024-07-22 18:09:28.025578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:23.895 [2024-07-22 18:09:28.034862] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190eaab8 00:32:23.895 [2024-07-22 18:09:28.036065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.895 [2024-07-22 18:09:28.036083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007a 
p:0 m:0 dnr:0 00:32:23.895 [2024-07-22 18:09:28.045275] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f7da8 00:32:23.895 [2024-07-22 18:09:28.046375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.895 [2024-07-22 18:09:28.046393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:32:23.895 [2024-07-22 18:09:28.055812] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f6020 00:32:23.895 [2024-07-22 18:09:28.057219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.895 [2024-07-22 18:09:28.057237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:32:23.895 [2024-07-22 18:09:28.066269] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e6300 00:32:23.895 [2024-07-22 18:09:28.067599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.895 [2024-07-22 18:09:28.067617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:32:23.895 [2024-07-22 18:09:28.076722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e5a90 00:32:23.895 [2024-07-22 18:09:28.078082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.895 [2024-07-22 18:09:28.078103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:32:23.895 [2024-07-22 18:09:28.087168] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190edd58 00:32:23.895 [2024-07-22 18:09:28.088593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:18692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.895 [2024-07-22 18:09:28.088612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:32:23.895 [2024-07-22 18:09:28.097610] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e9e10 00:32:23.895 [2024-07-22 18:09:28.098815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.895 [2024-07-22 18:09:28.098833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.895 [2024-07-22 18:09:28.108035] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190fa7d8 00:32:23.895 [2024-07-22 18:09:28.109705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.895 [2024-07-22 18:09:28.109723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:23.895 [2024-07-22 18:09:28.117604] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e5a90 00:32:23.895 [2024-07-22 18:09:28.118487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.895 [2024-07-22 18:09:28.118505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:23.895 [2024-07-22 18:09:28.128103] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190edd58 00:32:23.895 [2024-07-22 18:09:28.129021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.895 [2024-07-22 18:09:28.129039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:23.895 [2024-07-22 18:09:28.138874] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f20d8 00:32:23.895 [2024-07-22 18:09:28.139093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.895 [2024-07-22 18:09:28.139111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:23.895 [2024-07-22 18:09:28.150792] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e49b0 00:32:23.895 [2024-07-22 18:09:28.151925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.895 [2024-07-22 18:09:28.151943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:23.895 [2024-07-22 18:09:28.160208] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e4de8 00:32:23.895 [2024-07-22 18:09:28.160682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:23.895 [2024-07-22 18:09:28.160701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:24.157 [2024-07-22 18:09:28.170770] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e88f8 00:32:24.157 [2024-07-22 18:09:28.171596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.157 [2024-07-22 18:09:28.171614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:24.157 [2024-07-22 18:09:28.181263] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f4b08 00:32:24.157 [2024-07-22 18:09:28.182103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.157 [2024-07-22 18:09:28.182122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:24.157 [2024-07-22 18:09:28.191741] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f5378 00:32:24.157 [2024-07-22 18:09:28.192620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.157 [2024-07-22 18:09:28.192638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:32:24.157 [2024-07-22 18:09:28.202229] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e8088 00:32:24.157 [2024-07-22 18:09:28.203100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.157 [2024-07-22 18:09:28.203118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:24.157 [2024-07-22 18:09:28.212696] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e1f80 00:32:24.157 [2024-07-22 18:09:28.213589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.157 [2024-07-22 18:09:28.213607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:32:24.157 [2024-07-22 18:09:28.223380] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e1f80 00:32:24.157 [2024-07-22 18:09:28.224308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.157 [2024-07-22 18:09:28.224326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:32:24.157 [2024-07-22 18:09:28.233903] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e1f80 00:32:24.157 [2024-07-22 18:09:28.234814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.157 [2024-07-22 18:09:28.234832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:24.157 [2024-07-22 18:09:28.244404] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e1f80 00:32:24.157 [2024-07-22 18:09:28.245317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.157 [2024-07-22 18:09:28.245335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:32:24.157 [2024-07-22 18:09:28.254889] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e1f80 00:32:24.157 [2024-07-22 18:09:28.255805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.157 [2024-07-22 18:09:28.255823] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:32:24.157 [2024-07-22 18:09:28.265385] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e1f80 00:32:24.157 [2024-07-22 18:09:28.266377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.157 [2024-07-22 18:09:28.266395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:32:24.157 [2024-07-22 18:09:28.275895] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e1f80 00:32:24.157 [2024-07-22 18:09:28.276850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.157 [2024-07-22 18:09:28.276868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:24.157 [2024-07-22 18:09:28.286385] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190fac10 00:32:24.157 [2024-07-22 18:09:28.287541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.157 [2024-07-22 18:09:28.287559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:32:24.157 [2024-07-22 18:09:28.296894] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190fac10 00:32:24.157 [2024-07-22 18:09:28.297940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.157 [2024-07-22 18:09:28.297958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:24.157 [2024-07-22 18:09:28.307380] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190fac10 00:32:24.157 [2024-07-22 18:09:28.308335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.157 [2024-07-22 18:09:28.308357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:32:24.157 [2024-07-22 18:09:28.317872] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190fac10 00:32:24.157 [2024-07-22 18:09:28.318932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.157 [2024-07-22 18:09:28.318950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:32:24.157 [2024-07-22 18:09:28.328397] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190fac10 00:32:24.157 [2024-07-22 18:09:28.329440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.157 [2024-07-22 18:09:28.329458] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:24.157 [2024-07-22 18:09:28.338916] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190fac10 00:32:24.157 [2024-07-22 18:09:28.339963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.157 [2024-07-22 18:09:28.339981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:24.157 [2024-07-22 18:09:28.349487] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e5ec8 00:32:24.157 [2024-07-22 18:09:28.350544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.157 [2024-07-22 18:09:28.350565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:32:24.157 [2024-07-22 18:09:28.359984] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e5ec8 00:32:24.157 [2024-07-22 18:09:28.360877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.157 [2024-07-22 18:09:28.360895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:32:24.157 [2024-07-22 18:09:28.370466] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e5ec8 00:32:24.157 [2024-07-22 18:09:28.371375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.157 [2024-07-22 18:09:28.371393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:32:24.157 [2024-07-22 18:09:28.380930] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e88f8 00:32:24.157 [2024-07-22 18:09:28.381962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.157 [2024-07-22 18:09:28.381981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:32:24.157 [2024-07-22 18:09:28.391396] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e88f8 00:32:24.157 [2024-07-22 18:09:28.392428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.158 [2024-07-22 18:09:28.392451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:32:24.158 [2024-07-22 18:09:28.401865] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e88f8 00:32:24.158 [2024-07-22 18:09:28.402893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.158 [2024-07-22 
18:09:28.402912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:32:24.158 [2024-07-22 18:09:28.412330] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e88f8 00:32:24.158 [2024-07-22 18:09:28.413372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.158 [2024-07-22 18:09:28.413390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:32:24.158 [2024-07-22 18:09:28.422815] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e5ec8 00:32:24.158 [2024-07-22 18:09:28.423895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.158 [2024-07-22 18:09:28.423913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:32:24.419 [2024-07-22 18:09:28.433279] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e88f8 00:32:24.419 [2024-07-22 18:09:28.434382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.420 [2024-07-22 18:09:28.434400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:32:24.420 [2024-07-22 18:09:28.443738] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e1b48 00:32:24.420 [2024-07-22 18:09:28.444954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.420 [2024-07-22 18:09:28.444973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:32:24.420 [2024-07-22 18:09:28.454205] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e99d8 00:32:24.420 [2024-07-22 18:09:28.455229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.420 [2024-07-22 18:09:28.455247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:24.420 [2024-07-22 18:09:28.464792] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e99d8 00:32:24.420 [2024-07-22 18:09:28.465716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.420 [2024-07-22 18:09:28.465734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:32:24.420 [2024-07-22 18:09:28.475275] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e99d8 00:32:24.420 [2024-07-22 18:09:28.476212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:24.420 [2024-07-22 18:09:28.476230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:32:24.420 [2024-07-22 18:09:28.485606] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f20d8 00:32:24.420 [2024-07-22 18:09:28.485951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.420 [2024-07-22 18:09:28.485970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:24.420 [2024-07-22 18:09:28.496427] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f8618 00:32:24.420 [2024-07-22 18:09:28.497335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.420 [2024-07-22 18:09:28.497357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:24.420 [2024-07-22 18:09:28.506948] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190ea680 00:32:24.420 [2024-07-22 18:09:28.507910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.420 [2024-07-22 18:09:28.507928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:24.420 [2024-07-22 18:09:28.517439] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190edd58 00:32:24.420 [2024-07-22 18:09:28.518425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.420 [2024-07-22 18:09:28.518443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:24.420 [2024-07-22 18:09:28.527942] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190edd58 00:32:24.420 [2024-07-22 18:09:28.528958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.420 [2024-07-22 18:09:28.528976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:24.420 [2024-07-22 18:09:28.538490] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e4de8 00:32:24.420 [2024-07-22 18:09:28.539552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.420 [2024-07-22 18:09:28.539571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:24.420 [2024-07-22 18:09:28.549000] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e38d0 00:32:24.420 [2024-07-22 18:09:28.550284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20266 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:32:24.420 [2024-07-22 18:09:28.550303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:24.420 [2024-07-22 18:09:28.559532] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f7100 00:32:24.420 [2024-07-22 18:09:28.560631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.420 [2024-07-22 18:09:28.560649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:24.420 [2024-07-22 18:09:28.570017] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f7100 00:32:24.420 [2024-07-22 18:09:28.571151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:25066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.420 [2024-07-22 18:09:28.571170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:24.420 [2024-07-22 18:09:28.580517] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f7100 00:32:24.420 [2024-07-22 18:09:28.581686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.420 [2024-07-22 18:09:28.581705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:24.420 [2024-07-22 18:09:28.591014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190fda78 00:32:24.420 [2024-07-22 18:09:28.592185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.420 [2024-07-22 18:09:28.592204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:24.420 [2024-07-22 18:09:28.601844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e9e10 00:32:24.420 [2024-07-22 18:09:28.602989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.420 [2024-07-22 18:09:28.603007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:24.420 [2024-07-22 18:09:28.612309] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f4f40 00:32:24.420 [2024-07-22 18:09:28.613423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.420 [2024-07-22 18:09:28.613441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:24.420 [2024-07-22 18:09:28.622803] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e5ec8 00:32:24.420 [2024-07-22 18:09:28.623950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13499 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.420 [2024-07-22 18:09:28.623972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:24.420 [2024-07-22 18:09:28.633360] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190ea248 00:32:24.420 [2024-07-22 18:09:28.634527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.420 [2024-07-22 18:09:28.634546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:24.420 [2024-07-22 18:09:28.645283] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f9b30 00:32:24.420 [2024-07-22 18:09:28.647037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.420 [2024-07-22 18:09:28.647056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:24.420 [2024-07-22 18:09:28.654189] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f7970 00:32:24.420 [2024-07-22 18:09:28.655338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.420 [2024-07-22 18:09:28.655360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:24.420 [2024-07-22 18:09:28.664970] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190eaab8 00:32:24.420 [2024-07-22 18:09:28.665725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.420 [2024-07-22 18:09:28.665743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:32:24.420 [2024-07-22 18:09:28.675487] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190eaab8 00:32:24.420 [2024-07-22 18:09:28.676269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.420 [2024-07-22 18:09:28.676287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:32:24.420 [2024-07-22 18:09:28.685998] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190e3060 00:32:24.420 [2024-07-22 18:09:28.686784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.420 [2024-07-22 18:09:28.686802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:24.681 [2024-07-22 18:09:28.696467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190eea00 00:32:24.681 [2024-07-22 18:09:28.697365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:93 nsid:1 lba:811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.681 [2024-07-22 18:09:28.697384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:32:24.681 [2024-07-22 18:09:28.707591] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190f7538 00:32:24.681 [2024-07-22 18:09:28.708754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.681 [2024-07-22 18:09:28.708772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:32:24.681 [2024-07-22 18:09:28.718011] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4a90) with pdu=0x2000190ef6a8 00:32:24.681 [2024-07-22 18:09:28.719209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:24.681 [2024-07-22 18:09:28.719229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:32:24.681 00:32:24.681 Latency(us) 00:32:24.681 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:24.681 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:24.681 nvme0n1 : 2.00 24299.39 94.92 0.00 0.00 5261.42 2432.39 14115.45 00:32:24.681 =================================================================================================================== 00:32:24.681 Total : 24299.39 94.92 0.00 0.00 5261.42 2432.39 14115.45 00:32:24.681 0 00:32:24.681 18:09:28 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:32:24.681 18:09:28 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:32:24.681 18:09:28 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:32:24.681 18:09:28 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:32:24.681 | .driver_specific 00:32:24.681 | .nvme_error 00:32:24.681 | .status_code 00:32:24.681 | .command_transient_transport_error' 00:32:24.681 18:09:28 -- host/digest.sh@71 -- # (( 190 > 0 )) 00:32:24.681 18:09:28 -- host/digest.sh@73 -- # killprocess 1876576 00:32:24.681 18:09:28 -- common/autotest_common.sh@926 -- # '[' -z 1876576 ']' 00:32:24.681 18:09:28 -- common/autotest_common.sh@930 -- # kill -0 1876576 00:32:24.681 18:09:28 -- common/autotest_common.sh@931 -- # uname 00:32:24.681 18:09:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:24.681 18:09:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1876576 00:32:24.941 18:09:29 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:24.941 18:09:29 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:24.941 18:09:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1876576' 00:32:24.941 killing process with pid 1876576 00:32:24.941 18:09:29 -- common/autotest_common.sh@945 -- # kill 1876576 00:32:24.941 Received shutdown signal, test time was about 2.000000 seconds 00:32:24.941 00:32:24.941 Latency(us) 00:32:24.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:24.941 =================================================================================================================== 00:32:24.941 
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:24.941 18:09:29 -- common/autotest_common.sh@950 -- # wait 1876576 00:32:24.941 18:09:29 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:32:24.941 18:09:29 -- host/digest.sh@54 -- # local rw bs qd 00:32:24.941 18:09:29 -- host/digest.sh@56 -- # rw=randwrite 00:32:24.941 18:09:29 -- host/digest.sh@56 -- # bs=131072 00:32:24.941 18:09:29 -- host/digest.sh@56 -- # qd=16 00:32:24.941 18:09:29 -- host/digest.sh@58 -- # bperfpid=1877312 00:32:24.941 18:09:29 -- host/digest.sh@60 -- # waitforlisten 1877312 /var/tmp/bperf.sock 00:32:24.941 18:09:29 -- common/autotest_common.sh@819 -- # '[' -z 1877312 ']' 00:32:24.941 18:09:29 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:32:24.941 18:09:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:24.941 18:09:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:24.941 18:09:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:24.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:24.941 18:09:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:24.941 18:09:29 -- common/autotest_common.sh@10 -- # set +x 00:32:24.941 [2024-07-22 18:09:29.176938] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:32:24.942 [2024-07-22 18:09:29.177039] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1877312 ] 00:32:24.942 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:24.942 Zero copy mechanism will not be used. 
00:32:24.942 EAL: No free 2048 kB hugepages reported on node 1 00:32:25.201 [2024-07-22 18:09:29.244939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.201 [2024-07-22 18:09:29.303419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:25.772 18:09:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:25.772 18:09:29 -- common/autotest_common.sh@852 -- # return 0 00:32:25.772 18:09:29 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:25.772 18:09:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:26.032 18:09:30 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:26.032 18:09:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.032 18:09:30 -- common/autotest_common.sh@10 -- # set +x 00:32:26.032 18:09:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.032 18:09:30 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:26.032 18:09:30 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:26.293 nvme0n1 00:32:26.293 18:09:30 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:32:26.293 18:09:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.293 18:09:30 -- common/autotest_common.sh@10 -- # set +x 00:32:26.293 18:09:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.293 18:09:30 -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:26.293 18:09:30 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:26.555 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:26.555 Zero copy mechanism will not be used. 00:32:26.555 Running I/O for 2 seconds... 
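Condensed for reference, the host/digest.sh trace interleaved with the bdevperf output above amounts to the sequence sketched below. It is assembled only from commands visible in this trace (the paths, the /var/tmp/bperf.sock socket, the flags, and the 131072-byte / queue-depth-16 workload are taken verbatim from it); the $spdk shorthand is illustrative, and the socket used by the two accel_error_inject_error calls is not shown in this trace.

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # illustrative shorthand for the workspace path
    rpc="$spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

    # Second bperf run: bdevperf on its own RPC socket, 131072-byte random writes, queue depth 16, 2 s.
    "$spdk/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

    # Count NVMe errors per status code and retry failed I/O indefinitely.
    $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # CRC-32C error injection is toggled through the test's rpc_cmd helper (its socket is not shown above):
    # injection is disabled while the controller is attached with data digest (--ddgst) enabled,
    # then crc32c corruption is injected so subsequent WRITEs carry a bad data digest.
    rpc.py accel_error_inject_error -o crc32c -t disable
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

    # Run the workload, then read back the transient transport error counter that the test
    # asserts is greater than zero (the earlier 4096-byte run above counted 190).
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
    $rpc bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
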
00:32:26.555 [2024-07-22 18:09:30.584060] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.555 [2024-07-22 18:09:30.584305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.555 [2024-07-22 18:09:30.584334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.555 [2024-07-22 18:09:30.590075] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.555 [2024-07-22 18:09:30.590378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.555 [2024-07-22 18:09:30.590399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.555 [2024-07-22 18:09:30.595789] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.555 [2024-07-22 18:09:30.595870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.555 [2024-07-22 18:09:30.595889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.555 [2024-07-22 18:09:30.602253] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.555 [2024-07-22 18:09:30.602326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.555 [2024-07-22 18:09:30.602343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.555 [2024-07-22 18:09:30.608588] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.555 [2024-07-22 18:09:30.608699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.555 [2024-07-22 18:09:30.608717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.555 [2024-07-22 18:09:30.615285] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.555 [2024-07-22 18:09:30.615369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.555 [2024-07-22 18:09:30.615387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.555 [2024-07-22 18:09:30.622871] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.555 [2024-07-22 18:09:30.622961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.555 [2024-07-22 18:09:30.622978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.555 [2024-07-22 18:09:30.631233] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.555 [2024-07-22 18:09:30.631484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.555 [2024-07-22 18:09:30.631503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.555 [2024-07-22 18:09:30.641253] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.555 [2024-07-22 18:09:30.641445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.555 [2024-07-22 18:09:30.641462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.555 [2024-07-22 18:09:30.651996] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.555 [2024-07-22 18:09:30.652062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.555 [2024-07-22 18:09:30.652080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.555 [2024-07-22 18:09:30.662950] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.555 [2024-07-22 18:09:30.663059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.556 [2024-07-22 18:09:30.663077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.556 [2024-07-22 18:09:30.674073] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.556 [2024-07-22 18:09:30.674186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.556 [2024-07-22 18:09:30.674203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.556 [2024-07-22 18:09:30.685682] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.556 [2024-07-22 18:09:30.685778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.556 [2024-07-22 18:09:30.685798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.556 [2024-07-22 18:09:30.693764] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.556 [2024-07-22 18:09:30.694026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.556 [2024-07-22 18:09:30.694043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.556 [2024-07-22 18:09:30.702121] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.556 [2024-07-22 18:09:30.702297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.556 [2024-07-22 18:09:30.702315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.556 [2024-07-22 18:09:30.709700] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.556 [2024-07-22 18:09:30.709943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.556 [2024-07-22 18:09:30.709963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.556 [2024-07-22 18:09:30.717173] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.556 [2024-07-22 18:09:30.717345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.556 [2024-07-22 18:09:30.717369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.556 [2024-07-22 18:09:30.724321] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.556 [2024-07-22 18:09:30.724394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.556 [2024-07-22 18:09:30.724412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.556 [2024-07-22 18:09:30.731471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.556 [2024-07-22 18:09:30.731548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.556 [2024-07-22 18:09:30.731566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.556 [2024-07-22 18:09:30.738728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.556 [2024-07-22 18:09:30.738945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.556 [2024-07-22 18:09:30.738962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.556 [2024-07-22 18:09:30.745979] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.556 [2024-07-22 18:09:30.746246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.556 [2024-07-22 18:09:30.746264] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.556 [2024-07-22 18:09:30.752895] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.556 [2024-07-22 18:09:30.752982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.556 [2024-07-22 18:09:30.753000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.556 [2024-07-22 18:09:30.758484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.556 [2024-07-22 18:09:30.758601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.556 [2024-07-22 18:09:30.758620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.556 [2024-07-22 18:09:30.765268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.556 [2024-07-22 18:09:30.765368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.556 [2024-07-22 18:09:30.765386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.556 [2024-07-22 18:09:30.769416] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.556 [2024-07-22 18:09:30.769580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.556 [2024-07-22 18:09:30.769597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.556 [2024-07-22 18:09:30.772807] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.556 [2024-07-22 18:09:30.772938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.556 [2024-07-22 18:09:30.772955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.556 [2024-07-22 18:09:30.776072] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.556 [2024-07-22 18:09:30.776133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.556 [2024-07-22 18:09:30.776151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.556 [2024-07-22 18:09:30.779336] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.556 [2024-07-22 18:09:30.779426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.556 
[2024-07-22 18:09:30.779443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.556 [2024-07-22 18:09:30.782506] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.556 [2024-07-22 18:09:30.782607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.556 [2024-07-22 18:09:30.782625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.556 [2024-07-22 18:09:30.785546] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.556 [2024-07-22 18:09:30.785617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.556 [2024-07-22 18:09:30.785635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.556 [2024-07-22 18:09:30.788706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.556 [2024-07-22 18:09:30.788810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.556 [2024-07-22 18:09:30.788828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.556 [2024-07-22 18:09:30.794537] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.556 [2024-07-22 18:09:30.794807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.556 [2024-07-22 18:09:30.794826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.556 [2024-07-22 18:09:30.800596] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.556 [2024-07-22 18:09:30.800901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.556 [2024-07-22 18:09:30.800920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.556 [2024-07-22 18:09:30.806593] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.556 [2024-07-22 18:09:30.806945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.556 [2024-07-22 18:09:30.806963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.556 [2024-07-22 18:09:30.812191] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.556 [2024-07-22 18:09:30.812308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.556 [2024-07-22 18:09:30.812326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.556 [2024-07-22 18:09:30.817923] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.556 [2024-07-22 18:09:30.818154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.556 [2024-07-22 18:09:30.818172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.556 [2024-07-22 18:09:30.823072] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.556 [2024-07-22 18:09:30.823163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.556 [2024-07-22 18:09:30.823180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.556 [2024-07-22 18:09:30.828796] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.557 [2024-07-22 18:09:30.828895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.557 [2024-07-22 18:09:30.828913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.819 [2024-07-22 18:09:30.834079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.819 [2024-07-22 18:09:30.834204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.819 [2024-07-22 18:09:30.834224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.819 [2024-07-22 18:09:30.837552] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.819 [2024-07-22 18:09:30.837652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.819 [2024-07-22 18:09:30.837669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.819 [2024-07-22 18:09:30.841020] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.819 [2024-07-22 18:09:30.841184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.819 [2024-07-22 18:09:30.841202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.819 [2024-07-22 18:09:30.844877] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.819 [2024-07-22 18:09:30.845028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.819 [2024-07-22 18:09:30.845045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.819 [2024-07-22 18:09:30.849824] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.819 [2024-07-22 18:09:30.849934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.819 [2024-07-22 18:09:30.849952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.819 [2024-07-22 18:09:30.855560] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.819 [2024-07-22 18:09:30.855876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.819 [2024-07-22 18:09:30.855895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.819 [2024-07-22 18:09:30.860842] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.819 [2024-07-22 18:09:30.861094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.819 [2024-07-22 18:09:30.861111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.819 [2024-07-22 18:09:30.866456] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.819 [2024-07-22 18:09:30.866766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.819 [2024-07-22 18:09:30.866784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.819 [2024-07-22 18:09:30.871657] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.819 [2024-07-22 18:09:30.871765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.819 [2024-07-22 18:09:30.871782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.819 [2024-07-22 18:09:30.879008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.819 [2024-07-22 18:09:30.879084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.819 [2024-07-22 18:09:30.879102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.819 [2024-07-22 18:09:30.883099] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.819 [2024-07-22 18:09:30.883341] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.819 [2024-07-22 18:09:30.883372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.819 [2024-07-22 18:09:30.889344] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.819 [2024-07-22 18:09:30.889682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.819 [2024-07-22 18:09:30.889706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.819 [2024-07-22 18:09:30.893850] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.819 [2024-07-22 18:09:30.894032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.819 [2024-07-22 18:09:30.894051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.819 [2024-07-22 18:09:30.899707] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.819 [2024-07-22 18:09:30.900024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.819 [2024-07-22 18:09:30.900042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.819 [2024-07-22 18:09:30.904444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.819 [2024-07-22 18:09:30.904515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.819 [2024-07-22 18:09:30.904533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.819 [2024-07-22 18:09:30.908094] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.819 [2024-07-22 18:09:30.908224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.819 [2024-07-22 18:09:30.908241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.819 [2024-07-22 18:09:30.911435] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.819 [2024-07-22 18:09:30.911547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.819 [2024-07-22 18:09:30.911565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.819 [2024-07-22 18:09:30.914772] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.819 [2024-07-22 18:09:30.914857] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.819 [2024-07-22 18:09:30.914874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.819 [2024-07-22 18:09:30.918034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.819 [2024-07-22 18:09:30.918208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.819 [2024-07-22 18:09:30.918226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.819 [2024-07-22 18:09:30.921011] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.819 [2024-07-22 18:09:30.921159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.819 [2024-07-22 18:09:30.921176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.819 [2024-07-22 18:09:30.925698] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.819 [2024-07-22 18:09:30.925770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.819 [2024-07-22 18:09:30.925787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.819 [2024-07-22 18:09:30.929587] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.819 [2024-07-22 18:09:30.929794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.819 [2024-07-22 18:09:30.929811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.819 [2024-07-22 18:09:30.933269] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.820 [2024-07-22 18:09:30.933394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.820 [2024-07-22 18:09:30.933412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.820 [2024-07-22 18:09:30.939760] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.820 [2024-07-22 18:09:30.939842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.820 [2024-07-22 18:09:30.939860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.820 [2024-07-22 18:09:30.945456] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.820 [2024-07-22 
18:09:30.945731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.820 [2024-07-22 18:09:30.945749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.820 [2024-07-22 18:09:30.949197] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.820 [2024-07-22 18:09:30.949338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.820 [2024-07-22 18:09:30.949360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.820 [2024-07-22 18:09:30.952259] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.820 [2024-07-22 18:09:30.952426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.820 [2024-07-22 18:09:30.952450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.820 [2024-07-22 18:09:30.955263] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.820 [2024-07-22 18:09:30.955408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.820 [2024-07-22 18:09:30.955425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.820 [2024-07-22 18:09:30.958784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.820 [2024-07-22 18:09:30.958897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.820 [2024-07-22 18:09:30.958915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.820 [2024-07-22 18:09:30.962283] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.820 [2024-07-22 18:09:30.962394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.820 [2024-07-22 18:09:30.962411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.820 [2024-07-22 18:09:30.965610] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.820 [2024-07-22 18:09:30.965707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.820 [2024-07-22 18:09:30.965724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.820 [2024-07-22 18:09:30.968918] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 
00:32:26.820 [2024-07-22 18:09:30.968998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.820 [2024-07-22 18:09:30.969016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.820 [2024-07-22 18:09:30.972108] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.820 [2024-07-22 18:09:30.972257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.820 [2024-07-22 18:09:30.972274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.820 [2024-07-22 18:09:30.976695] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.820 [2024-07-22 18:09:30.976817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.820 [2024-07-22 18:09:30.976835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.820 [2024-07-22 18:09:30.980616] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.820 [2024-07-22 18:09:30.980807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.820 [2024-07-22 18:09:30.980824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.820 [2024-07-22 18:09:30.984263] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.820 [2024-07-22 18:09:30.984506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.820 [2024-07-22 18:09:30.984524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.820 [2024-07-22 18:09:30.992596] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.820 [2024-07-22 18:09:30.992849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.820 [2024-07-22 18:09:30.992867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.820 [2024-07-22 18:09:31.001939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.820 [2024-07-22 18:09:31.002235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.820 [2024-07-22 18:09:31.002253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.820 [2024-07-22 18:09:31.011474] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.820 [2024-07-22 18:09:31.011898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.820 [2024-07-22 18:09:31.011918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.820 [2024-07-22 18:09:31.021867] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.820 [2024-07-22 18:09:31.021943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.820 [2024-07-22 18:09:31.021961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.820 [2024-07-22 18:09:31.032531] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.820 [2024-07-22 18:09:31.032839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.820 [2024-07-22 18:09:31.032858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.820 [2024-07-22 18:09:31.043007] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.820 [2024-07-22 18:09:31.043112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.820 [2024-07-22 18:09:31.043129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:26.820 [2024-07-22 18:09:31.054165] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.820 [2024-07-22 18:09:31.054510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.820 [2024-07-22 18:09:31.054528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:26.820 [2024-07-22 18:09:31.064882] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.820 [2024-07-22 18:09:31.065142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.820 [2024-07-22 18:09:31.065160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:26.820 [2024-07-22 18:09:31.075380] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.820 [2024-07-22 18:09:31.075693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.820 [2024-07-22 18:09:31.075711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:26.820 [2024-07-22 18:09:31.086304] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:26.820 [2024-07-22 18:09:31.086457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:26.820 [2024-07-22 18:09:31.086475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.082 [2024-07-22 18:09:31.097428] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.082 [2024-07-22 18:09:31.097879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.082 [2024-07-22 18:09:31.097898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.082 [2024-07-22 18:09:31.107363] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.082 [2024-07-22 18:09:31.107663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.082 [2024-07-22 18:09:31.107682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.082 [2024-07-22 18:09:31.117755] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.082 [2024-07-22 18:09:31.117964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.082 [2024-07-22 18:09:31.117982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.082 [2024-07-22 18:09:31.127867] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.082 [2024-07-22 18:09:31.128010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.082 [2024-07-22 18:09:31.128027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.083 [2024-07-22 18:09:31.138444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.083 [2024-07-22 18:09:31.138517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.083 [2024-07-22 18:09:31.138536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.083 [2024-07-22 18:09:31.149483] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.083 [2024-07-22 18:09:31.149875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.083 [2024-07-22 18:09:31.149895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.083 [2024-07-22 18:09:31.160435] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.083 [2024-07-22 18:09:31.160728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.083 [2024-07-22 18:09:31.160752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.083 [2024-07-22 18:09:31.170692] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.083 [2024-07-22 18:09:31.171029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.083 [2024-07-22 18:09:31.171047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.083 [2024-07-22 18:09:31.181038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.083 [2024-07-22 18:09:31.181470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.083 [2024-07-22 18:09:31.181488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.083 [2024-07-22 18:09:31.189621] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.083 [2024-07-22 18:09:31.189733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.083 [2024-07-22 18:09:31.189751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.083 [2024-07-22 18:09:31.196103] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.083 [2024-07-22 18:09:31.196380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.083 [2024-07-22 18:09:31.196398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.083 [2024-07-22 18:09:31.204087] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.083 [2024-07-22 18:09:31.204337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.083 [2024-07-22 18:09:31.204359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.083 [2024-07-22 18:09:31.211478] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.083 [2024-07-22 18:09:31.211563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.083 [2024-07-22 18:09:31.211581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.083 
[2024-07-22 18:09:31.214868] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.083 [2024-07-22 18:09:31.214933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.083 [2024-07-22 18:09:31.214951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.083 [2024-07-22 18:09:31.218320] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.083 [2024-07-22 18:09:31.218422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.083 [2024-07-22 18:09:31.218439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.083 [2024-07-22 18:09:31.221371] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.083 [2024-07-22 18:09:31.221448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.083 [2024-07-22 18:09:31.221465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.083 [2024-07-22 18:09:31.224827] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.083 [2024-07-22 18:09:31.224934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.083 [2024-07-22 18:09:31.224951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.083 [2024-07-22 18:09:31.229817] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.083 [2024-07-22 18:09:31.229909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.083 [2024-07-22 18:09:31.229926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.083 [2024-07-22 18:09:31.236078] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.083 [2024-07-22 18:09:31.236154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.083 [2024-07-22 18:09:31.236172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.083 [2024-07-22 18:09:31.242926] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.083 [2024-07-22 18:09:31.243024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.083 [2024-07-22 18:09:31.243041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:32:27.083 [2024-07-22 18:09:31.249135] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.083 [2024-07-22 18:09:31.249225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.083 [2024-07-22 18:09:31.249242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.083 [2024-07-22 18:09:31.257318] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.083 [2024-07-22 18:09:31.257417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.083 [2024-07-22 18:09:31.257434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.083 [2024-07-22 18:09:31.265280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.083 [2024-07-22 18:09:31.265363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.083 [2024-07-22 18:09:31.265381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.083 [2024-07-22 18:09:31.272459] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.083 [2024-07-22 18:09:31.272737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.083 [2024-07-22 18:09:31.272756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.083 [2024-07-22 18:09:31.280416] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.083 [2024-07-22 18:09:31.280680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.083 [2024-07-22 18:09:31.280698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.083 [2024-07-22 18:09:31.288469] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.083 [2024-07-22 18:09:31.288539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.083 [2024-07-22 18:09:31.288556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.083 [2024-07-22 18:09:31.296941] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.083 [2024-07-22 18:09:31.297008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.083 [2024-07-22 18:09:31.297025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.083 [2024-07-22 18:09:31.304034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.083 [2024-07-22 18:09:31.304120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.083 [2024-07-22 18:09:31.304138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.083 [2024-07-22 18:09:31.310247] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.083 [2024-07-22 18:09:31.310560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.083 [2024-07-22 18:09:31.310579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.083 [2024-07-22 18:09:31.314905] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.083 [2024-07-22 18:09:31.315019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.083 [2024-07-22 18:09:31.315036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.083 [2024-07-22 18:09:31.318557] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.083 [2024-07-22 18:09:31.318635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.083 [2024-07-22 18:09:31.318653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.084 [2024-07-22 18:09:31.321662] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.084 [2024-07-22 18:09:31.321802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.084 [2024-07-22 18:09:31.321820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.084 [2024-07-22 18:09:31.324698] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.084 [2024-07-22 18:09:31.324812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.084 [2024-07-22 18:09:31.324832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.084 [2024-07-22 18:09:31.327708] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.084 [2024-07-22 18:09:31.327783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.084 [2024-07-22 18:09:31.327800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.084 [2024-07-22 18:09:31.330919] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.084 [2024-07-22 18:09:31.331060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.084 [2024-07-22 18:09:31.331077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.084 [2024-07-22 18:09:31.335152] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.084 [2024-07-22 18:09:31.335459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.084 [2024-07-22 18:09:31.335477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.084 [2024-07-22 18:09:31.344949] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.084 [2024-07-22 18:09:31.345105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.084 [2024-07-22 18:09:31.345123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.084 [2024-07-22 18:09:31.355340] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.084 [2024-07-22 18:09:31.355673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.084 [2024-07-22 18:09:31.355692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.346 [2024-07-22 18:09:31.366436] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.346 [2024-07-22 18:09:31.366576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.346 [2024-07-22 18:09:31.366593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.346 [2024-07-22 18:09:31.377310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.346 [2024-07-22 18:09:31.377497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.346 [2024-07-22 18:09:31.377514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.346 [2024-07-22 18:09:31.388126] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.346 [2024-07-22 18:09:31.388212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.346 [2024-07-22 18:09:31.388230] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.346 [2024-07-22 18:09:31.398708] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.346 [2024-07-22 18:09:31.398941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.346 [2024-07-22 18:09:31.398959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.346 [2024-07-22 18:09:31.409294] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.346 [2024-07-22 18:09:31.409621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.346 [2024-07-22 18:09:31.409639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.346 [2024-07-22 18:09:31.420105] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.346 [2024-07-22 18:09:31.420460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.346 [2024-07-22 18:09:31.420479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.346 [2024-07-22 18:09:31.431912] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.346 [2024-07-22 18:09:31.432266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.346 [2024-07-22 18:09:31.432285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.346 [2024-07-22 18:09:31.442369] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.346 [2024-07-22 18:09:31.442646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.346 [2024-07-22 18:09:31.442664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.346 [2024-07-22 18:09:31.452872] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.346 [2024-07-22 18:09:31.452951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.346 [2024-07-22 18:09:31.452968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.346 [2024-07-22 18:09:31.463416] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.346 [2024-07-22 18:09:31.463706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.346 [2024-07-22 
18:09:31.463724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.346 [2024-07-22 18:09:31.471680] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.346 [2024-07-22 18:09:31.471779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.346 [2024-07-22 18:09:31.471796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.346 [2024-07-22 18:09:31.475181] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.346 [2024-07-22 18:09:31.475277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.346 [2024-07-22 18:09:31.475295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.346 [2024-07-22 18:09:31.480363] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.346 [2024-07-22 18:09:31.480464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.346 [2024-07-22 18:09:31.480481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.346 [2024-07-22 18:09:31.487107] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.346 [2024-07-22 18:09:31.487327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.346 [2024-07-22 18:09:31.487344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.346 [2024-07-22 18:09:31.493906] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.346 [2024-07-22 18:09:31.494036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.346 [2024-07-22 18:09:31.494054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.346 [2024-07-22 18:09:31.501742] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.346 [2024-07-22 18:09:31.501861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.346 [2024-07-22 18:09:31.501879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.346 [2024-07-22 18:09:31.505669] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.346 [2024-07-22 18:09:31.505750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:27.346 [2024-07-22 18:09:31.505768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.346 [2024-07-22 18:09:31.508806] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.346 [2024-07-22 18:09:31.508944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.346 [2024-07-22 18:09:31.508962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.346 [2024-07-22 18:09:31.512646] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.346 [2024-07-22 18:09:31.512792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.346 [2024-07-22 18:09:31.512810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.346 [2024-07-22 18:09:31.515821] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.346 [2024-07-22 18:09:31.515891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.346 [2024-07-22 18:09:31.515908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.346 [2024-07-22 18:09:31.518876] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.346 [2024-07-22 18:09:31.519012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.346 [2024-07-22 18:09:31.519033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.346 [2024-07-22 18:09:31.521867] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.346 [2024-07-22 18:09:31.521961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.346 [2024-07-22 18:09:31.521978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.347 [2024-07-22 18:09:31.524910] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.347 [2024-07-22 18:09:31.525024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.347 [2024-07-22 18:09:31.525042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.347 [2024-07-22 18:09:31.528150] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.347 [2024-07-22 18:09:31.528294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.347 [2024-07-22 18:09:31.528311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.347 [2024-07-22 18:09:31.532641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.347 [2024-07-22 18:09:31.532795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.347 [2024-07-22 18:09:31.532812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.347 [2024-07-22 18:09:31.541631] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.347 [2024-07-22 18:09:31.541850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.347 [2024-07-22 18:09:31.541868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.347 [2024-07-22 18:09:31.550288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.347 [2024-07-22 18:09:31.550559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.347 [2024-07-22 18:09:31.550579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.347 [2024-07-22 18:09:31.560399] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.347 [2024-07-22 18:09:31.560498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.347 [2024-07-22 18:09:31.560515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.347 [2024-07-22 18:09:31.570187] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.347 [2024-07-22 18:09:31.570354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.347 [2024-07-22 18:09:31.570371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.347 [2024-07-22 18:09:31.581320] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.347 [2024-07-22 18:09:31.581399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.347 [2024-07-22 18:09:31.581416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.347 [2024-07-22 18:09:31.588915] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.347 [2024-07-22 18:09:31.589027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.347 [2024-07-22 18:09:31.589044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.347 [2024-07-22 18:09:31.592200] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.347 [2024-07-22 18:09:31.592339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.347 [2024-07-22 18:09:31.592361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.347 [2024-07-22 18:09:31.595298] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.347 [2024-07-22 18:09:31.595384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.347 [2024-07-22 18:09:31.595401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.347 [2024-07-22 18:09:31.598463] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.347 [2024-07-22 18:09:31.598604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.347 [2024-07-22 18:09:31.598622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.347 [2024-07-22 18:09:31.601485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.347 [2024-07-22 18:09:31.601604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.347 [2024-07-22 18:09:31.601621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.347 [2024-07-22 18:09:31.604471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.347 [2024-07-22 18:09:31.604547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.347 [2024-07-22 18:09:31.604564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.347 [2024-07-22 18:09:31.607464] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.347 [2024-07-22 18:09:31.607602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.347 [2024-07-22 18:09:31.607619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.347 [2024-07-22 18:09:31.610435] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.347 [2024-07-22 18:09:31.610552] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.347 [2024-07-22 18:09:31.610569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.347 [2024-07-22 18:09:31.613418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.347 [2024-07-22 18:09:31.613515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.347 [2024-07-22 18:09:31.613532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.347 [2024-07-22 18:09:31.616582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.347 [2024-07-22 18:09:31.616724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.347 [2024-07-22 18:09:31.616742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.347 [2024-07-22 18:09:31.620493] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.347 [2024-07-22 18:09:31.620748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.347 [2024-07-22 18:09:31.620766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.608 [2024-07-22 18:09:31.624436] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.608 [2024-07-22 18:09:31.624569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.608 [2024-07-22 18:09:31.624586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.608 [2024-07-22 18:09:31.627430] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.608 [2024-07-22 18:09:31.627525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.608 [2024-07-22 18:09:31.627542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.608 [2024-07-22 18:09:31.630457] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.608 [2024-07-22 18:09:31.630528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.608 [2024-07-22 18:09:31.630545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.608 [2024-07-22 18:09:31.633437] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.608 [2024-07-22 18:09:31.633566] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.609 [2024-07-22 18:09:31.633583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.609 [2024-07-22 18:09:31.636382] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.609 [2024-07-22 18:09:31.636458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.609 [2024-07-22 18:09:31.636474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.609 [2024-07-22 18:09:31.639342] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.609 [2024-07-22 18:09:31.639455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.609 [2024-07-22 18:09:31.639475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.609 [2024-07-22 18:09:31.642344] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.609 [2024-07-22 18:09:31.642480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.609 [2024-07-22 18:09:31.642497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.609 [2024-07-22 18:09:31.645268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.609 [2024-07-22 18:09:31.645373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.609 [2024-07-22 18:09:31.645391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.609 [2024-07-22 18:09:31.648287] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.609 [2024-07-22 18:09:31.648430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.609 [2024-07-22 18:09:31.648447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.609 [2024-07-22 18:09:31.651258] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.609 [2024-07-22 18:09:31.651367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.609 [2024-07-22 18:09:31.651385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.609 [2024-07-22 18:09:31.654332] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.609 [2024-07-22 
18:09:31.654444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.609 [2024-07-22 18:09:31.654461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.609 [2024-07-22 18:09:31.659190] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.609 [2024-07-22 18:09:31.659455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.609 [2024-07-22 18:09:31.659472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.609 [2024-07-22 18:09:31.668750] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.609 [2024-07-22 18:09:31.669009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.609 [2024-07-22 18:09:31.669026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.609 [2024-07-22 18:09:31.677356] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.609 [2024-07-22 18:09:31.677659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.609 [2024-07-22 18:09:31.677677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.609 [2024-07-22 18:09:31.687294] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.609 [2024-07-22 18:09:31.687391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.609 [2024-07-22 18:09:31.687409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.609 [2024-07-22 18:09:31.697374] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.609 [2024-07-22 18:09:31.697490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.609 [2024-07-22 18:09:31.697508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.609 [2024-07-22 18:09:31.708699] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.609 [2024-07-22 18:09:31.708985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.609 [2024-07-22 18:09:31.709004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.609 [2024-07-22 18:09:31.718539] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with 
pdu=0x2000190fef90 00:32:27.609 [2024-07-22 18:09:31.718679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.609 [2024-07-22 18:09:31.718696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.609 [2024-07-22 18:09:31.728797] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.609 [2024-07-22 18:09:31.728899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.609 [2024-07-22 18:09:31.728917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.609 [2024-07-22 18:09:31.735345] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.609 [2024-07-22 18:09:31.735448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.609 [2024-07-22 18:09:31.735464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.609 [2024-07-22 18:09:31.738485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.609 [2024-07-22 18:09:31.738564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.609 [2024-07-22 18:09:31.738581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.609 [2024-07-22 18:09:31.741535] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.609 [2024-07-22 18:09:31.741621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.609 [2024-07-22 18:09:31.741638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.609 [2024-07-22 18:09:31.744805] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.609 [2024-07-22 18:09:31.744931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.609 [2024-07-22 18:09:31.744948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.609 [2024-07-22 18:09:31.747792] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.609 [2024-07-22 18:09:31.747872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.609 [2024-07-22 18:09:31.747890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.609 [2024-07-22 18:09:31.750844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.609 [2024-07-22 18:09:31.750981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.609 [2024-07-22 18:09:31.750999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.609 [2024-07-22 18:09:31.753883] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.609 [2024-07-22 18:09:31.753991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.609 [2024-07-22 18:09:31.754008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.609 [2024-07-22 18:09:31.757040] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.609 [2024-07-22 18:09:31.757111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.609 [2024-07-22 18:09:31.757128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.609 [2024-07-22 18:09:31.761371] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.609 [2024-07-22 18:09:31.761666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.609 [2024-07-22 18:09:31.761684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.609 [2024-07-22 18:09:31.768832] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.609 [2024-07-22 18:09:31.768900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.609 [2024-07-22 18:09:31.768917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.609 [2024-07-22 18:09:31.776966] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.609 [2024-07-22 18:09:31.777109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.609 [2024-07-22 18:09:31.777127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.609 [2024-07-22 18:09:31.784364] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.609 [2024-07-22 18:09:31.784613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.609 [2024-07-22 18:09:31.784631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.610 [2024-07-22 18:09:31.791047] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.610 [2024-07-22 18:09:31.791116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.610 [2024-07-22 18:09:31.791136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.610 [2024-07-22 18:09:31.798833] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.610 [2024-07-22 18:09:31.798953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.610 [2024-07-22 18:09:31.798970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.610 [2024-07-22 18:09:31.805848] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.610 [2024-07-22 18:09:31.806076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.610 [2024-07-22 18:09:31.806093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.610 [2024-07-22 18:09:31.813395] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.610 [2024-07-22 18:09:31.813666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.610 [2024-07-22 18:09:31.813684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.610 [2024-07-22 18:09:31.821257] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.610 [2024-07-22 18:09:31.821456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.610 [2024-07-22 18:09:31.821474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.610 [2024-07-22 18:09:31.828200] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.610 [2024-07-22 18:09:31.828282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.610 [2024-07-22 18:09:31.828299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.610 [2024-07-22 18:09:31.835547] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.610 [2024-07-22 18:09:31.835706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.610 [2024-07-22 18:09:31.835724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.610 [2024-07-22 18:09:31.843250] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.610 [2024-07-22 18:09:31.843407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.610 [2024-07-22 18:09:31.843426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.610 [2024-07-22 18:09:31.849689] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.610 [2024-07-22 18:09:31.849945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.610 [2024-07-22 18:09:31.849962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.610 [2024-07-22 18:09:31.859582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.610 [2024-07-22 18:09:31.859729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.610 [2024-07-22 18:09:31.859746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.610 [2024-07-22 18:09:31.865708] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.610 [2024-07-22 18:09:31.866115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.610 [2024-07-22 18:09:31.866133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.610 [2024-07-22 18:09:31.871228] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.610 [2024-07-22 18:09:31.871319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.610 [2024-07-22 18:09:31.871337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.610 [2024-07-22 18:09:31.875454] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.610 [2024-07-22 18:09:31.875624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.610 [2024-07-22 18:09:31.875641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.610 [2024-07-22 18:09:31.878895] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.610 [2024-07-22 18:09:31.878970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.610 [2024-07-22 18:09:31.878987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.610 
[2024-07-22 18:09:31.881964] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.610 [2024-07-22 18:09:31.882062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.610 [2024-07-22 18:09:31.882080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.871 [2024-07-22 18:09:31.885076] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.871 [2024-07-22 18:09:31.885215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.871 [2024-07-22 18:09:31.885232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.871 [2024-07-22 18:09:31.888152] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.871 [2024-07-22 18:09:31.888245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.871 [2024-07-22 18:09:31.888262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.871 [2024-07-22 18:09:31.891310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.871 [2024-07-22 18:09:31.891448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.871 [2024-07-22 18:09:31.891466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.871 [2024-07-22 18:09:31.894326] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.871 [2024-07-22 18:09:31.894440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.871 [2024-07-22 18:09:31.894458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.871 [2024-07-22 18:09:31.898421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.871 [2024-07-22 18:09:31.898485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.871 [2024-07-22 18:09:31.898502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.871 [2024-07-22 18:09:31.905059] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.871 [2024-07-22 18:09:31.905296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.871 [2024-07-22 18:09:31.905314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:32:27.871 [2024-07-22 18:09:31.911488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.872 [2024-07-22 18:09:31.911687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.872 [2024-07-22 18:09:31.911704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.872 [2024-07-22 18:09:31.917947] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.872 [2024-07-22 18:09:31.918063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.872 [2024-07-22 18:09:31.918080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.872 [2024-07-22 18:09:31.925412] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.872 [2024-07-22 18:09:31.925490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.872 [2024-07-22 18:09:31.925507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.872 [2024-07-22 18:09:31.928705] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.872 [2024-07-22 18:09:31.928818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.872 [2024-07-22 18:09:31.928836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.872 [2024-07-22 18:09:31.931877] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.872 [2024-07-22 18:09:31.932031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.872 [2024-07-22 18:09:31.932048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.872 [2024-07-22 18:09:31.935126] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.872 [2024-07-22 18:09:31.935206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.872 [2024-07-22 18:09:31.935226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.872 [2024-07-22 18:09:31.939444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.872 [2024-07-22 18:09:31.939561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.872 [2024-07-22 18:09:31.939578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.872 [2024-07-22 18:09:31.945357] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.872 [2024-07-22 18:09:31.945601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.872 [2024-07-22 18:09:31.945618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.872 [2024-07-22 18:09:31.951666] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.872 [2024-07-22 18:09:31.951904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.872 [2024-07-22 18:09:31.951921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.872 [2024-07-22 18:09:31.960309] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.872 [2024-07-22 18:09:31.960513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.872 [2024-07-22 18:09:31.960531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.872 [2024-07-22 18:09:31.971182] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.872 [2024-07-22 18:09:31.971452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.872 [2024-07-22 18:09:31.971470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.872 [2024-07-22 18:09:31.981394] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.872 [2024-07-22 18:09:31.981465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.872 [2024-07-22 18:09:31.981482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.872 [2024-07-22 18:09:31.992450] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.872 [2024-07-22 18:09:31.992875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.872 [2024-07-22 18:09:31.992895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.872 [2024-07-22 18:09:32.002768] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.872 [2024-07-22 18:09:32.003027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.872 [2024-07-22 18:09:32.003044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.872 [2024-07-22 18:09:32.012627] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.872 [2024-07-22 18:09:32.012708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.872 [2024-07-22 18:09:32.012725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.872 [2024-07-22 18:09:32.022568] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.872 [2024-07-22 18:09:32.022642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.872 [2024-07-22 18:09:32.022659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.872 [2024-07-22 18:09:32.030910] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.872 [2024-07-22 18:09:32.031039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.872 [2024-07-22 18:09:32.031056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.872 [2024-07-22 18:09:32.038306] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.872 [2024-07-22 18:09:32.038391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.872 [2024-07-22 18:09:32.038409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.872 [2024-07-22 18:09:32.045770] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.872 [2024-07-22 18:09:32.045838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.872 [2024-07-22 18:09:32.045855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.872 [2024-07-22 18:09:32.053387] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.872 [2024-07-22 18:09:32.053458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.872 [2024-07-22 18:09:32.053474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.872 [2024-07-22 18:09:32.061444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.872 [2024-07-22 18:09:32.061527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.872 [2024-07-22 18:09:32.061544] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.872 [2024-07-22 18:09:32.069319] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.872 [2024-07-22 18:09:32.069483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.872 [2024-07-22 18:09:32.069502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.872 [2024-07-22 18:09:32.077094] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.872 [2024-07-22 18:09:32.077320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.872 [2024-07-22 18:09:32.077338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.872 [2024-07-22 18:09:32.085191] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.872 [2024-07-22 18:09:32.085285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.872 [2024-07-22 18:09:32.085303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.872 [2024-07-22 18:09:32.094962] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.872 [2024-07-22 18:09:32.095251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.872 [2024-07-22 18:09:32.095269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:27.872 [2024-07-22 18:09:32.105146] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.872 [2024-07-22 18:09:32.105224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.872 [2024-07-22 18:09:32.105241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:27.872 [2024-07-22 18:09:32.116713] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.872 [2024-07-22 18:09:32.116807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.872 [2024-07-22 18:09:32.116824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:27.872 [2024-07-22 18:09:32.127046] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.872 [2024-07-22 18:09:32.127342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.873 [2024-07-22 
18:09:32.127366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:27.873 [2024-07-22 18:09:32.138049] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:27.873 [2024-07-22 18:09:32.138119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:27.873 [2024-07-22 18:09:32.138139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.134 [2024-07-22 18:09:32.147771] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.134 [2024-07-22 18:09:32.148028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.134 [2024-07-22 18:09:32.148047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.134 [2024-07-22 18:09:32.159306] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.134 [2024-07-22 18:09:32.159405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.134 [2024-07-22 18:09:32.159424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.134 [2024-07-22 18:09:32.170609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.134 [2024-07-22 18:09:32.171068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.134 [2024-07-22 18:09:32.171091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.134 [2024-07-22 18:09:32.181536] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.134 [2024-07-22 18:09:32.181856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.134 [2024-07-22 18:09:32.181875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.134 [2024-07-22 18:09:32.192437] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.134 [2024-07-22 18:09:32.192677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.134 [2024-07-22 18:09:32.192694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.134 [2024-07-22 18:09:32.203887] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.134 [2024-07-22 18:09:32.203969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:32:28.134 [2024-07-22 18:09:32.203987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.134 [2024-07-22 18:09:32.215049] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.134 [2024-07-22 18:09:32.215354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.134 [2024-07-22 18:09:32.215372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.134 [2024-07-22 18:09:32.225668] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.134 [2024-07-22 18:09:32.225814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.134 [2024-07-22 18:09:32.225831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.134 [2024-07-22 18:09:32.236112] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.134 [2024-07-22 18:09:32.236425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.134 [2024-07-22 18:09:32.236444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.134 [2024-07-22 18:09:32.247070] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.134 [2024-07-22 18:09:32.247459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.134 [2024-07-22 18:09:32.247478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.134 [2024-07-22 18:09:32.257722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.134 [2024-07-22 18:09:32.258035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.134 [2024-07-22 18:09:32.258054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.134 [2024-07-22 18:09:32.268500] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.134 [2024-07-22 18:09:32.268789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.134 [2024-07-22 18:09:32.268808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.134 [2024-07-22 18:09:32.279589] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.134 [2024-07-22 18:09:32.279829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.134 [2024-07-22 18:09:32.279846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.134 [2024-07-22 18:09:32.290456] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.134 [2024-07-22 18:09:32.290717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.134 [2024-07-22 18:09:32.290742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.134 [2024-07-22 18:09:32.301417] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.134 [2024-07-22 18:09:32.301520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.134 [2024-07-22 18:09:32.301537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.134 [2024-07-22 18:09:32.311777] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.134 [2024-07-22 18:09:32.312070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.134 [2024-07-22 18:09:32.312088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.134 [2024-07-22 18:09:32.321995] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.134 [2024-07-22 18:09:32.322287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.134 [2024-07-22 18:09:32.322305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.134 [2024-07-22 18:09:32.330756] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.134 [2024-07-22 18:09:32.330886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.134 [2024-07-22 18:09:32.330904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.134 [2024-07-22 18:09:32.340614] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.134 [2024-07-22 18:09:32.340715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.134 [2024-07-22 18:09:32.340733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.134 [2024-07-22 18:09:32.349993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.134 [2024-07-22 18:09:32.350267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.134 [2024-07-22 18:09:32.350285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.134 [2024-07-22 18:09:32.360847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.134 [2024-07-22 18:09:32.360927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.134 [2024-07-22 18:09:32.360944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.134 [2024-07-22 18:09:32.371651] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.134 [2024-07-22 18:09:32.371939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.134 [2024-07-22 18:09:32.371958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.134 [2024-07-22 18:09:32.383481] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.134 [2024-07-22 18:09:32.383764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.134 [2024-07-22 18:09:32.383782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.134 [2024-07-22 18:09:32.394403] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.135 [2024-07-22 18:09:32.394543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.135 [2024-07-22 18:09:32.394562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.135 [2024-07-22 18:09:32.403697] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.135 [2024-07-22 18:09:32.403948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.135 [2024-07-22 18:09:32.403967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.396 [2024-07-22 18:09:32.412945] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.396 [2024-07-22 18:09:32.413004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.396 [2024-07-22 18:09:32.413023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.396 [2024-07-22 18:09:32.422921] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.396 [2024-07-22 18:09:32.423053] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.396 [2024-07-22 18:09:32.423071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.396 [2024-07-22 18:09:32.432491] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.396 [2024-07-22 18:09:32.432606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.396 [2024-07-22 18:09:32.432624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.396 [2024-07-22 18:09:32.443311] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.396 [2024-07-22 18:09:32.443622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.396 [2024-07-22 18:09:32.443644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.396 [2024-07-22 18:09:32.453085] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.396 [2024-07-22 18:09:32.453194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.396 [2024-07-22 18:09:32.453211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.396 [2024-07-22 18:09:32.461603] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.396 [2024-07-22 18:09:32.461929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.396 [2024-07-22 18:09:32.461947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.396 [2024-07-22 18:09:32.470565] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.396 [2024-07-22 18:09:32.470834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.396 [2024-07-22 18:09:32.470851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.396 [2024-07-22 18:09:32.480246] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.396 [2024-07-22 18:09:32.480388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.396 [2024-07-22 18:09:32.480406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.396 [2024-07-22 18:09:32.488988] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.396 [2024-07-22 18:09:32.489187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.396 [2024-07-22 18:09:32.489204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.396 [2024-07-22 18:09:32.492676] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.396 [2024-07-22 18:09:32.492807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.396 [2024-07-22 18:09:32.492824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.396 [2024-07-22 18:09:32.496255] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.396 [2024-07-22 18:09:32.496375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.396 [2024-07-22 18:09:32.496393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.396 [2024-07-22 18:09:32.500031] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.396 [2024-07-22 18:09:32.500205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.396 [2024-07-22 18:09:32.500222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:28.396 [2024-07-22 18:09:32.505499] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.396 [2024-07-22 18:09:32.505805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.396 [2024-07-22 18:09:32.505824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:28.396 [2024-07-22 18:09:32.515215] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.396 [2024-07-22 18:09:32.515494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.396 [2024-07-22 18:09:32.515512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:32:28.396 [2024-07-22 18:09:32.525521] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.396 [2024-07-22 18:09:32.525753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:28.396 [2024-07-22 18:09:32.525771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:28.396 [2024-07-22 18:09:32.536624] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90 00:32:28.396 [2024-07-22 
18:09:32.536904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.396 [2024-07-22 18:09:32.536924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:28.396 [2024-07-22 18:09:32.546543] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90
00:32:28.397 [2024-07-22 18:09:32.546620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.397 [2024-07-22 18:09:32.546638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:32:28.397 [2024-07-22 18:09:32.556961] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90
00:32:28.397 [2024-07-22 18:09:32.557064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.397 [2024-07-22 18:09:32.557082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:32:28.397 [2024-07-22 18:09:32.566615] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90
00:32:28.397 [2024-07-22 18:09:32.566696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.397 [2024-07-22 18:09:32.566713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:32:28.397 [2024-07-22 18:09:32.576145] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19e4d60) with pdu=0x2000190fef90
00:32:28.397 [2024-07-22 18:09:32.576252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:28.397 [2024-07-22 18:09:32.576269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:32:28.397
00:32:28.397 Latency(us)
00:32:28.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:28.397 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:32:28.397 nvme0n1 : 2.00 4483.83 560.48 0.00 0.00 3561.43 1373.74 11695.66
00:32:28.397 ===================================================================================================================
00:32:28.397 Total : 4483.83 560.48 0.00 0.00 3561.43 1373.74 11695.66
00:32:28.397 0
00:32:28.397 18:09:32 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:28.397 18:09:32 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:28.397 18:09:32 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:32:28.397 | .driver_specific
00:32:28.397 | .nvme_error
00:32:28.397 | .status_code
00:32:28.397 | .command_transient_transport_error'
00:32:28.397 18:09:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:28.657 18:09:32 -- host/digest.sh@71 -- # (( 289 > 0 ))
00:32:28.657 18:09:32 -- host/digest.sh@73 -- # killprocess 1877312
00:32:28.657 18:09:32 -- common/autotest_common.sh@926 -- # '[' -z 1877312 ']'
00:32:28.657 18:09:32 -- common/autotest_common.sh@930 -- # kill -0 1877312
00:32:28.657 18:09:32 -- common/autotest_common.sh@931 -- # uname
00:32:28.657 18:09:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:32:28.657 18:09:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1877312
00:32:28.657 18:09:32 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:32:28.657 18:09:32 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:32:28.657 18:09:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1877312'
00:32:28.657 killing process with pid 1877312
00:32:28.657 18:09:32 -- common/autotest_common.sh@945 -- # kill 1877312
00:32:28.657 Received shutdown signal, test time was about 2.000000 seconds
00:32:28.657
00:32:28.657 Latency(us)
00:32:28.657 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:28.657 ===================================================================================================================
00:32:28.657 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:28.657 18:09:32 -- common/autotest_common.sh@950 -- # wait 1877312
00:32:28.918 18:09:32 -- host/digest.sh@115 -- # killprocess 1875099
00:32:28.918 18:09:32 -- common/autotest_common.sh@926 -- # '[' -z 1875099 ']'
00:32:28.918 18:09:32 -- common/autotest_common.sh@930 -- # kill -0 1875099
00:32:28.918 18:09:32 -- common/autotest_common.sh@931 -- # uname
00:32:28.918 18:09:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:32:28.918 18:09:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1875099
00:32:28.918 18:09:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:32:28.918 18:09:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:32:28.918 18:09:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1875099'
00:32:28.918 killing process with pid 1875099
00:32:28.918 18:09:33 -- common/autotest_common.sh@945 -- # kill 1875099
00:32:28.918 18:09:33 -- common/autotest_common.sh@950 -- # wait 1875099
00:32:28.918
00:32:28.918 real 0m16.889s
00:32:28.918 user 0m33.714s
00:32:28.918 sys 0m3.561s
00:32:28.918 18:09:33 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:32:28.918 18:09:33 -- common/autotest_common.sh@10 -- # set +x
00:32:28.918 ************************************
00:32:28.918 END TEST nvmf_digest_error
00:32:28.918 ************************************
00:32:28.918 18:09:33 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT
00:32:28.918 18:09:33 -- host/digest.sh@139 -- # nvmftestfini
00:32:28.918 18:09:33 -- nvmf/common.sh@476 -- # nvmfcleanup
00:32:28.918 18:09:33 -- nvmf/common.sh@116 -- # sync
00:32:28.918 18:09:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:32:29.177 18:09:33 -- nvmf/common.sh@119 -- # set +e
00:32:29.177 18:09:33 -- nvmf/common.sh@120 -- # for i in {1..20}
00:32:29.177 18:09:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:32:29.177 rmmod nvme_tcp
00:32:29.177 rmmod nvme_fabrics
00:32:29.177 rmmod nvme_keyring
00:32:29.177 18:09:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:32:29.177 18:09:33 -- nvmf/common.sh@123 -- # set -e
00:32:29.177 18:09:33 -- nvmf/common.sh@124 -- # return 0
00:32:29.177 18:09:33 -- nvmf/common.sh@477 -- # '[' -n 1875099 ']'
00:32:29.177 18:09:33 -- nvmf/common.sh@478 -- # killprocess 1875099
00:32:29.177 18:09:33 -- common/autotest_common.sh@926 -- # '[' -z 1875099
']'
00:32:29.177 18:09:33 -- common/autotest_common.sh@930 -- # kill -0 1875099
00:32:29.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1875099) - No such process
00:32:29.177 18:09:33 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1875099 is not found'
00:32:29.178 Process with pid 1875099 is not found
00:32:29.178 18:09:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:32:29.178 18:09:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:32:29.178 18:09:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:32:29.178 18:09:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:32:29.178 18:09:33 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:32:29.178 18:09:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:29.178 18:09:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:32:29.178 18:09:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:31.089 18:09:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:32:31.089
00:32:31.089 real 0m42.792s
00:32:31.089 user 1m5.840s
00:32:31.089 sys 0m13.468s
00:32:31.089 18:09:35 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:32:31.089 18:09:35 -- common/autotest_common.sh@10 -- # set +x
00:32:31.089 ************************************
00:32:31.089 END TEST nvmf_digest
00:32:31.089 ************************************
00:32:31.089 18:09:35 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]]
00:32:31.089 18:09:35 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]]
00:32:31.089 18:09:35 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]]
00:32:31.089 18:09:35 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:32:31.089 18:09:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:32:31.089 18:09:35 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:32:31.089 18:09:35 -- common/autotest_common.sh@10 -- # set +x
00:32:31.351 ************************************
00:32:31.351 START TEST nvmf_bdevperf
00:32:31.351 ************************************
00:32:31.351 18:09:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:32:31.351 * Looking for test storage...
00:32:31.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:31.351 18:09:35 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:31.351 18:09:35 -- nvmf/common.sh@7 -- # uname -s 00:32:31.351 18:09:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:31.351 18:09:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:31.351 18:09:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:31.351 18:09:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:31.351 18:09:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:31.351 18:09:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:31.351 18:09:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:31.351 18:09:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:31.351 18:09:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:31.351 18:09:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:31.351 18:09:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:32:31.351 18:09:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:32:31.351 18:09:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:31.351 18:09:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:31.351 18:09:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:31.351 18:09:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:31.351 18:09:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:31.351 18:09:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:31.351 18:09:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:31.351 18:09:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.351 18:09:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.351 18:09:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.351 18:09:35 -- paths/export.sh@5 -- # export PATH 00:32:31.351 18:09:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.351 18:09:35 -- nvmf/common.sh@46 -- # : 0 00:32:31.351 18:09:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:31.351 18:09:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:31.351 18:09:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:31.351 18:09:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:31.351 18:09:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:31.351 18:09:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:31.351 18:09:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:31.351 18:09:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:31.351 18:09:35 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:31.351 18:09:35 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:31.351 18:09:35 -- host/bdevperf.sh@24 -- # nvmftestinit 00:32:31.351 18:09:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:31.351 18:09:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:31.351 18:09:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:31.351 18:09:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:31.351 18:09:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:31.351 18:09:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:31.351 18:09:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:31.351 18:09:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:31.351 18:09:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:31.351 18:09:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:31.351 18:09:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:31.351 18:09:35 -- common/autotest_common.sh@10 -- # set +x 00:32:39.594 18:09:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:39.594 18:09:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:39.594 18:09:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:39.594 18:09:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:39.594 18:09:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:39.594 18:09:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:39.594 18:09:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:39.594 18:09:43 -- nvmf/common.sh@294 -- # net_devs=() 00:32:39.594 18:09:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:39.594 18:09:43 -- nvmf/common.sh@295 
-- # e810=() 00:32:39.594 18:09:43 -- nvmf/common.sh@295 -- # local -ga e810 00:32:39.594 18:09:43 -- nvmf/common.sh@296 -- # x722=() 00:32:39.594 18:09:43 -- nvmf/common.sh@296 -- # local -ga x722 00:32:39.594 18:09:43 -- nvmf/common.sh@297 -- # mlx=() 00:32:39.594 18:09:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:39.594 18:09:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:39.594 18:09:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:39.594 18:09:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:39.594 18:09:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:39.594 18:09:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:39.594 18:09:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:39.594 18:09:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:39.594 18:09:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:39.594 18:09:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:39.594 18:09:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:39.594 18:09:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:39.594 18:09:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:39.594 18:09:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:39.594 18:09:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:39.594 18:09:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:39.594 18:09:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:39.594 18:09:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:39.594 18:09:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:39.594 18:09:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:39.594 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:39.594 18:09:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:39.594 18:09:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:39.595 18:09:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:39.595 18:09:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:39.595 18:09:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:39.595 18:09:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:39.595 18:09:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:39.595 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:39.595 18:09:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:39.595 18:09:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:39.595 18:09:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:39.595 18:09:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:39.595 18:09:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:39.595 18:09:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:39.595 18:09:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:39.595 18:09:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:39.595 18:09:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:39.595 18:09:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:39.595 18:09:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:39.595 18:09:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:39.595 18:09:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:39.595 Found 
net devices under 0000:4b:00.0: cvl_0_0 00:32:39.595 18:09:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:39.595 18:09:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:39.595 18:09:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:39.595 18:09:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:39.595 18:09:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:39.595 18:09:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:39.595 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:39.595 18:09:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:39.595 18:09:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:39.595 18:09:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:39.595 18:09:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:39.595 18:09:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:39.595 18:09:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:39.595 18:09:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:39.595 18:09:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:39.595 18:09:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:39.595 18:09:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:39.595 18:09:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:39.595 18:09:43 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:39.595 18:09:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:39.595 18:09:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:39.595 18:09:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:39.595 18:09:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:39.595 18:09:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:39.595 18:09:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:39.595 18:09:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:39.595 18:09:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:39.595 18:09:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:39.595 18:09:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:39.595 18:09:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:39.595 18:09:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:39.595 18:09:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:39.595 18:09:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:39.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:39.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:32:39.595 00:32:39.595 --- 10.0.0.2 ping statistics --- 00:32:39.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:39.595 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:32:39.595 18:09:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:39.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:39.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:32:39.595 00:32:39.595 --- 10.0.0.1 ping statistics --- 00:32:39.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:39.595 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:32:39.595 18:09:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:39.595 18:09:43 -- nvmf/common.sh@410 -- # return 0 00:32:39.595 18:09:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:32:39.595 18:09:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:39.595 18:09:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:39.595 18:09:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:39.595 18:09:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:39.595 18:09:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:39.595 18:09:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:39.595 18:09:43 -- host/bdevperf.sh@25 -- # tgt_init 00:32:39.595 18:09:43 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:39.595 18:09:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:39.595 18:09:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:39.595 18:09:43 -- common/autotest_common.sh@10 -- # set +x 00:32:39.595 18:09:43 -- nvmf/common.sh@469 -- # nvmfpid=1882188 00:32:39.595 18:09:43 -- nvmf/common.sh@470 -- # waitforlisten 1882188 00:32:39.595 18:09:43 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:39.595 18:09:43 -- common/autotest_common.sh@819 -- # '[' -z 1882188 ']' 00:32:39.595 18:09:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:39.595 18:09:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:39.595 18:09:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:39.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:39.595 18:09:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:39.595 18:09:43 -- common/autotest_common.sh@10 -- # set +x 00:32:39.595 [2024-07-22 18:09:43.551205] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:32:39.595 [2024-07-22 18:09:43.551270] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:39.595 EAL: No free 2048 kB hugepages reported on node 1 00:32:39.595 [2024-07-22 18:09:43.624617] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:39.595 [2024-07-22 18:09:43.694894] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:39.595 [2024-07-22 18:09:43.695012] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:39.595 [2024-07-22 18:09:43.695019] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:39.595 [2024-07-22 18:09:43.695026] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
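The setup traced above is the namespace-based NVMe/TCP loopback that nvmf/common.sh builds out of the two detected E810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), port 4420 is opened in iptables, and both directions are ping-checked before nvme-tcp is loaded. A condensed sketch of that sequence, using only the interface names, addresses and commands visible in the log (script flow paraphrased, not the literal common.sh code), looks roughly like:

  TARGET_IF=cvl_0_0            # detected port handed to the target, lives inside the namespace
  INITIATOR_IF=cvl_0_1         # detected port left in the root namespace for the initiator
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                         # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator
  modprobe nvme-tcp

The target application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt, as traced below), which is why its listener on 10.0.0.2:4420 is reachable from the initiator side over plain TCP.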
00:32:39.595 [2024-07-22 18:09:43.695140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:39.595 [2024-07-22 18:09:43.695256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:39.595 [2024-07-22 18:09:43.695258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:40.167 18:09:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:40.167 18:09:44 -- common/autotest_common.sh@852 -- # return 0 00:32:40.167 18:09:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:40.167 18:09:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:40.167 18:09:44 -- common/autotest_common.sh@10 -- # set +x 00:32:40.167 18:09:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:40.167 18:09:44 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:40.167 18:09:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:40.167 18:09:44 -- common/autotest_common.sh@10 -- # set +x 00:32:40.167 [2024-07-22 18:09:44.426332] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:40.167 18:09:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:40.167 18:09:44 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:40.167 18:09:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:40.167 18:09:44 -- common/autotest_common.sh@10 -- # set +x 00:32:40.426 Malloc0 00:32:40.426 18:09:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:40.426 18:09:44 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:40.426 18:09:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:40.426 18:09:44 -- common/autotest_common.sh@10 -- # set +x 00:32:40.426 18:09:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:40.426 18:09:44 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:40.426 18:09:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:40.426 18:09:44 -- common/autotest_common.sh@10 -- # set +x 00:32:40.426 18:09:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:40.426 18:09:44 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:40.426 18:09:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:40.426 18:09:44 -- common/autotest_common.sh@10 -- # set +x 00:32:40.426 [2024-07-22 18:09:44.494236] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:40.426 18:09:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:40.426 18:09:44 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:32:40.426 18:09:44 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:32:40.426 18:09:44 -- nvmf/common.sh@520 -- # config=() 00:32:40.426 18:09:44 -- nvmf/common.sh@520 -- # local subsystem config 00:32:40.426 18:09:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:40.426 18:09:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:40.426 { 00:32:40.426 "params": { 00:32:40.426 "name": "Nvme$subsystem", 00:32:40.426 "trtype": "$TEST_TRANSPORT", 00:32:40.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:40.426 "adrfam": "ipv4", 00:32:40.426 "trsvcid": "$NVMF_PORT", 00:32:40.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:40.426 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:40.426 "hdgst": ${hdgst:-false}, 00:32:40.426 "ddgst": ${ddgst:-false} 00:32:40.426 }, 00:32:40.426 "method": "bdev_nvme_attach_controller" 00:32:40.426 } 00:32:40.426 EOF 00:32:40.426 )") 00:32:40.426 18:09:44 -- nvmf/common.sh@542 -- # cat 00:32:40.426 18:09:44 -- nvmf/common.sh@544 -- # jq . 00:32:40.426 18:09:44 -- nvmf/common.sh@545 -- # IFS=, 00:32:40.426 18:09:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:40.426 "params": { 00:32:40.426 "name": "Nvme1", 00:32:40.426 "trtype": "tcp", 00:32:40.426 "traddr": "10.0.0.2", 00:32:40.426 "adrfam": "ipv4", 00:32:40.426 "trsvcid": "4420", 00:32:40.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:40.426 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:40.426 "hdgst": false, 00:32:40.426 "ddgst": false 00:32:40.426 }, 00:32:40.426 "method": "bdev_nvme_attach_controller" 00:32:40.426 }' 00:32:40.426 [2024-07-22 18:09:44.543433] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:32:40.426 [2024-07-22 18:09:44.543482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1882495 ] 00:32:40.426 EAL: No free 2048 kB hugepages reported on node 1 00:32:40.426 [2024-07-22 18:09:44.622684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:40.426 [2024-07-22 18:09:44.682099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:40.997 Running I/O for 1 seconds... 00:32:41.938 00:32:41.938 Latency(us) 00:32:41.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:41.938 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:41.938 Verification LBA range: start 0x0 length 0x4000 00:32:41.938 Nvme1n1 : 1.01 14658.41 57.26 0.00 0.00 8691.41 1203.59 14518.74 00:32:41.938 =================================================================================================================== 00:32:41.938 Total : 14658.41 57.26 0.00 0.00 8691.41 1203.59 14518.74 00:32:41.938 18:09:46 -- host/bdevperf.sh@30 -- # bdevperfpid=1882796 00:32:41.938 18:09:46 -- host/bdevperf.sh@32 -- # sleep 3 00:32:41.938 18:09:46 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:32:41.938 18:09:46 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:32:41.938 18:09:46 -- nvmf/common.sh@520 -- # config=() 00:32:41.938 18:09:46 -- nvmf/common.sh@520 -- # local subsystem config 00:32:41.938 18:09:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:41.938 18:09:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:41.938 { 00:32:41.938 "params": { 00:32:41.938 "name": "Nvme$subsystem", 00:32:41.938 "trtype": "$TEST_TRANSPORT", 00:32:41.938 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:41.938 "adrfam": "ipv4", 00:32:41.938 "trsvcid": "$NVMF_PORT", 00:32:41.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:41.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:41.938 "hdgst": ${hdgst:-false}, 00:32:41.938 "ddgst": ${ddgst:-false} 00:32:41.938 }, 00:32:41.939 "method": "bdev_nvme_attach_controller" 00:32:41.939 } 00:32:41.939 EOF 00:32:41.939 )") 00:32:41.939 18:09:46 -- nvmf/common.sh@542 -- # cat 00:32:41.939 18:09:46 -- nvmf/common.sh@544 -- # jq . 
00:32:41.939 18:09:46 -- nvmf/common.sh@545 -- # IFS=, 00:32:41.939 18:09:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:41.939 "params": { 00:32:41.939 "name": "Nvme1", 00:32:41.939 "trtype": "tcp", 00:32:41.939 "traddr": "10.0.0.2", 00:32:41.939 "adrfam": "ipv4", 00:32:41.939 "trsvcid": "4420", 00:32:41.939 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:41.939 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:41.939 "hdgst": false, 00:32:41.939 "ddgst": false 00:32:41.939 }, 00:32:41.939 "method": "bdev_nvme_attach_controller" 00:32:41.939 }' 00:32:41.939 [2024-07-22 18:09:46.155626] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:32:41.939 [2024-07-22 18:09:46.155679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1882796 ] 00:32:41.939 EAL: No free 2048 kB hugepages reported on node 1 00:32:42.200 [2024-07-22 18:09:46.236204] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.200 [2024-07-22 18:09:46.296377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:42.460 Running I/O for 15 seconds... 00:32:45.007 18:09:49 -- host/bdevperf.sh@33 -- # kill -9 1882188 00:32:45.007 18:09:49 -- host/bdevperf.sh@35 -- # sleep 3 00:32:45.007 [2024-07-22 18:09:49.125101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:42192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.007 [2024-07-22 18:09:49.125142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.007 [2024-07-22 18:09:49.125161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:42200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.007 [2024-07-22 18:09:49.125170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.007 [2024-07-22 18:09:49.125182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.007 [2024-07-22 18:09:49.125190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.007 [2024-07-22 18:09:49.125201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.007 [2024-07-22 18:09:49.125212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.007 [2024-07-22 18:09:49.125223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:41600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.007 [2024-07-22 18:09:49.125233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.007 [2024-07-22 18:09:49.125244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:41624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.007 [2024-07-22 18:09:49.125251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.007 [2024-07-22 18:09:49.125259] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:41656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.007 [2024-07-22 18:09:49.125268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.007 [2024-07-22 18:09:49.125279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:41672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.007 [2024-07-22 18:09:49.125287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.007 [2024-07-22 18:09:49.125297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:41688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.007 [2024-07-22 18:09:49.125305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.007 [2024-07-22 18:09:49.125314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:41696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.007 [2024-07-22 18:09:49.125323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.007 [2024-07-22 18:09:49.125333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:41720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.007 [2024-07-22 18:09:49.125342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.007 [2024-07-22 18:09:49.125356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:41744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.007 [2024-07-22 18:09:49.125363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.007 [2024-07-22 18:09:49.125371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:42248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.007 [2024-07-22 18:09:49.125378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.007 [2024-07-22 18:09:49.125387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.007 [2024-07-22 18:09:49.125397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.007 [2024-07-22 18:09:49.125406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:42312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.007 [2024-07-22 18:09:49.125413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.007 [2024-07-22 18:09:49.125423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:42320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.007 [2024-07-22 18:09:49.125434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.007 [2024-07-22 18:09:49.125445] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:42360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.007 [2024-07-22 18:09:49.125456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.007 [2024-07-22 18:09:49.125468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:41752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.007 [2024-07-22 18:09:49.125477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.007 [2024-07-22 18:09:49.125487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:41776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.007 [2024-07-22 18:09:49.125494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.007 [2024-07-22 18:09:49.125505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:41800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.007 [2024-07-22 18:09:49.125514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.007 [2024-07-22 18:09:49.125526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:41872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.007 [2024-07-22 18:09:49.125534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.007 [2024-07-22 18:09:49.125545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:41912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.007 [2024-07-22 18:09:49.125553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.007 [2024-07-22 18:09:49.125563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:41920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.007 [2024-07-22 18:09:49.125572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.007 [2024-07-22 18:09:49.125581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:41928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.007 [2024-07-22 18:09:49.125589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.007 [2024-07-22 18:09:49.125598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:41936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.007 [2024-07-22 18:09:49.125605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.007 [2024-07-22 18:09:49.125614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:42368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.007 [2024-07-22 18:09:49.125622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.007 [2024-07-22 18:09:49.125631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:42376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.007 [2024-07-22 18:09:49.125638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.007 [2024-07-22 18:09:49.125646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:42384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.007 [2024-07-22 18:09:49.125653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.007 [2024-07-22 18:09:49.125662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:42392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.007 [2024-07-22 18:09:49.125668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.007 [2024-07-22 18:09:49.125680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:42400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.007 [2024-07-22 18:09:49.125687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.007 [2024-07-22 18:09:49.125696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:42408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.007 [2024-07-22 18:09:49.125703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.007 [2024-07-22 18:09:49.125712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:42416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.007 [2024-07-22 18:09:49.125719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.125727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:42424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.008 [2024-07-22 18:09:49.125734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.125744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:42432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.008 [2024-07-22 18:09:49.125751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.125759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.008 [2024-07-22 18:09:49.125766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.125774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.008 [2024-07-22 18:09:49.125781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.125790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:41952 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:32:45.008 [2024-07-22 18:09:49.125796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.125805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:41968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.008 [2024-07-22 18:09:49.125811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.125820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:42000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.008 [2024-07-22 18:09:49.125826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.125834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:42008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.008 [2024-07-22 18:09:49.125841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.125849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:42024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.008 [2024-07-22 18:09:49.125856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.125864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.008 [2024-07-22 18:09:49.125873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.125882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.008 [2024-07-22 18:09:49.125888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.125897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:42064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.008 [2024-07-22 18:09:49.125903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.125912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:42456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.008 [2024-07-22 18:09:49.125918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.125927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:42464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.008 [2024-07-22 18:09:49.125933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.125942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:42472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.008 [2024-07-22 
18:09:49.125949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.125958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:42480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.008 [2024-07-22 18:09:49.125964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.125973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:42488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.008 [2024-07-22 18:09:49.125979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.125988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.008 [2024-07-22 18:09:49.125995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.126004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.008 [2024-07-22 18:09:49.126011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.126019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:42512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.008 [2024-07-22 18:09:49.126026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.126034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:42520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.008 [2024-07-22 18:09:49.126040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.126050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:42528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.008 [2024-07-22 18:09:49.126056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.126066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:42536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.008 [2024-07-22 18:09:49.126073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.126081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:42544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.008 [2024-07-22 18:09:49.126088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.126096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:42552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.008 [2024-07-22 18:09:49.126102] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.126112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:42560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.008 [2024-07-22 18:09:49.126119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.126128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:42568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.008 [2024-07-22 18:09:49.126135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.126143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:42576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.008 [2024-07-22 18:09:49.126150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.126158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:42584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.008 [2024-07-22 18:09:49.126165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.126173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:42592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.008 [2024-07-22 18:09:49.126180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.126188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:42600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.008 [2024-07-22 18:09:49.126195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.126203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:42608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.008 [2024-07-22 18:09:49.126210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.126219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:42616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.008 [2024-07-22 18:09:49.126226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.126235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:42624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.008 [2024-07-22 18:09:49.126242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.126250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.008 [2024-07-22 18:09:49.126257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.126267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.008 [2024-07-22 18:09:49.126274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.126283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:42648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.008 [2024-07-22 18:09:49.126289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.126298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:42656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.008 [2024-07-22 18:09:49.126304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.008 [2024-07-22 18:09:49.126313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:42664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.009 [2024-07-22 18:09:49.126320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:42672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.009 [2024-07-22 18:09:49.126335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:42680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.009 [2024-07-22 18:09:49.126438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:42688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.009 [2024-07-22 18:09:49.126454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:42696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.009 [2024-07-22 18:09:49.126468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:42704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.009 [2024-07-22 18:09:49.126484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.009 [2024-07-22 18:09:49.126499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:42072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.009 [2024-07-22 18:09:49.126514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:42080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.009 [2024-07-22 18:09:49.126529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:42112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.009 [2024-07-22 18:09:49.126546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:42120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.009 [2024-07-22 18:09:49.126562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:42136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.009 [2024-07-22 18:09:49.126577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:42144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.009 [2024-07-22 18:09:49.126593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.009 [2024-07-22 18:09:49.126609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:42160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.009 [2024-07-22 18:09:49.126624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:42720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.009 [2024-07-22 18:09:49.126639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:42728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.009 [2024-07-22 18:09:49.126655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 
[2024-07-22 18:09:49.126663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.009 [2024-07-22 18:09:49.126670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:42744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.009 [2024-07-22 18:09:49.126686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:42752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.009 [2024-07-22 18:09:49.126702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:42760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.009 [2024-07-22 18:09:49.126717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:42768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.009 [2024-07-22 18:09:49.126732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.009 [2024-07-22 18:09:49.126749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:42784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.009 [2024-07-22 18:09:49.126765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:42792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.009 [2024-07-22 18:09:49.126780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:42800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.009 [2024-07-22 18:09:49.126795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:42808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.009 [2024-07-22 18:09:49.126811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126819] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:42816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.009 [2024-07-22 18:09:49.126826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:42824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.009 [2024-07-22 18:09:49.126841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.009 [2024-07-22 18:09:49.126857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:42168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.009 [2024-07-22 18:09:49.126872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:42176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.009 [2024-07-22 18:09:49.126887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:42184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.009 [2024-07-22 18:09:49.126903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:42216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.009 [2024-07-22 18:09:49.126918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:42232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.009 [2024-07-22 18:09:49.126935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:42240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.009 [2024-07-22 18:09:49.126951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:42256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.009 [2024-07-22 18:09:49.126966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:42264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.009 [2024-07-22 18:09:49.126981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.126989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:42840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.009 [2024-07-22 18:09:49.126997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.009 [2024-07-22 18:09:49.127006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:42848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.009 [2024-07-22 18:09:49.127012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.010 [2024-07-22 18:09:49.127020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:42856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.010 [2024-07-22 18:09:49.127027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.010 [2024-07-22 18:09:49.127035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.010 [2024-07-22 18:09:49.127042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.010 [2024-07-22 18:09:49.127051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:42872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.010 [2024-07-22 18:09:49.127058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.010 [2024-07-22 18:09:49.127066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.010 [2024-07-22 18:09:49.127073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.010 [2024-07-22 18:09:49.127081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:42888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.010 [2024-07-22 18:09:49.127087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.010 [2024-07-22 18:09:49.127095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:42896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.010 [2024-07-22 18:09:49.127103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.010 [2024-07-22 18:09:49.127111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.010 [2024-07-22 18:09:49.127118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.010 [2024-07-22 18:09:49.127126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:42912 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.010 [2024-07-22 18:09:49.127137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.010 [2024-07-22 18:09:49.127145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:42920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.010 [2024-07-22 18:09:49.127152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.010 [2024-07-22 18:09:49.127161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:42928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:45.010 [2024-07-22 18:09:49.127168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.010 [2024-07-22 18:09:49.127176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:42272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.010 [2024-07-22 18:09:49.127183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.010 [2024-07-22 18:09:49.127191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.010 [2024-07-22 18:09:49.127199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.010 [2024-07-22 18:09:49.127208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:42288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.010 [2024-07-22 18:09:49.127214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.010 [2024-07-22 18:09:49.127223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:42296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.010 [2024-07-22 18:09:49.127229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.010 [2024-07-22 18:09:49.127238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:42328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.010 [2024-07-22 18:09:49.127244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.010 [2024-07-22 18:09:49.127253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:42336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.010 [2024-07-22 18:09:49.127260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.010 [2024-07-22 18:09:49.127269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:42344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:45.010 [2024-07-22 18:09:49.127275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.010 [2024-07-22 18:09:49.127283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1194720 is same with the state(5) to be set 00:32:45.010 
[2024-07-22 18:09:49.127292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:45.010 [2024-07-22 18:09:49.127297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:45.010 [2024-07-22 18:09:49.127304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42352 len:8 PRP1 0x0 PRP2 0x0 00:32:45.010 [2024-07-22 18:09:49.127311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:45.010 [2024-07-22 18:09:49.127354] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1194720 was disconnected and freed. reset controller. 00:32:45.010 [2024-07-22 18:09:49.129585] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.010 [2024-07-22 18:09:49.129633] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.010 [2024-07-22 18:09:49.130240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.010 [2024-07-22 18:09:49.130569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.010 [2024-07-22 18:09:49.130605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.010 [2024-07-22 18:09:49.130615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.010 [2024-07-22 18:09:49.130718] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.010 [2024-07-22 18:09:49.130871] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.010 [2024-07-22 18:09:49.130880] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.010 [2024-07-22 18:09:49.130888] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.010 [2024-07-22 18:09:49.133149] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.010 [2024-07-22 18:09:49.142183] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.010 [2024-07-22 18:09:49.142638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.010 [2024-07-22 18:09:49.142973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.010 [2024-07-22 18:09:49.142986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.010 [2024-07-22 18:09:49.142995] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.010 [2024-07-22 18:09:49.143164] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.010 [2024-07-22 18:09:49.143283] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.010 [2024-07-22 18:09:49.143292] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.010 [2024-07-22 18:09:49.143300] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.010 [2024-07-22 18:09:49.145630] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.010 [2024-07-22 18:09:49.154444] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.010 [2024-07-22 18:09:49.154981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.010 [2024-07-22 18:09:49.155356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.010 [2024-07-22 18:09:49.155370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.010 [2024-07-22 18:09:49.155380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.010 [2024-07-22 18:09:49.155515] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.010 [2024-07-22 18:09:49.155650] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.010 [2024-07-22 18:09:49.155659] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.010 [2024-07-22 18:09:49.155666] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.010 [2024-07-22 18:09:49.157689] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.010 [2024-07-22 18:09:49.166635] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.010 [2024-07-22 18:09:49.167076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.010 [2024-07-22 18:09:49.167533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.010 [2024-07-22 18:09:49.167569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.010 [2024-07-22 18:09:49.167579] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.010 [2024-07-22 18:09:49.167696] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.010 [2024-07-22 18:09:49.167832] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.010 [2024-07-22 18:09:49.167841] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.010 [2024-07-22 18:09:49.167848] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.010 [2024-07-22 18:09:49.170012] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.010 [2024-07-22 18:09:49.178876] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.010 [2024-07-22 18:09:49.179444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.010 [2024-07-22 18:09:49.179812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.010 [2024-07-22 18:09:49.179825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.010 [2024-07-22 18:09:49.179834] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.010 [2024-07-22 18:09:49.179951] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.011 [2024-07-22 18:09:49.180087] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.011 [2024-07-22 18:09:49.180096] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.011 [2024-07-22 18:09:49.180103] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.011 [2024-07-22 18:09:49.182113] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.011 [2024-07-22 18:09:49.191161] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.011 [2024-07-22 18:09:49.191751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.011 [2024-07-22 18:09:49.192115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.011 [2024-07-22 18:09:49.192128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.011 [2024-07-22 18:09:49.192137] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.011 [2024-07-22 18:09:49.192288] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.011 [2024-07-22 18:09:49.192449] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.011 [2024-07-22 18:09:49.192458] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.011 [2024-07-22 18:09:49.192466] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.011 [2024-07-22 18:09:49.194521] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.011 [2024-07-22 18:09:49.203535] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.011 [2024-07-22 18:09:49.204069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.011 [2024-07-22 18:09:49.204173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.011 [2024-07-22 18:09:49.204184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.011 [2024-07-22 18:09:49.204193] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.011 [2024-07-22 18:09:49.204387] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.011 [2024-07-22 18:09:49.204507] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.011 [2024-07-22 18:09:49.204516] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.011 [2024-07-22 18:09:49.204524] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.011 [2024-07-22 18:09:49.206674] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.011 [2024-07-22 18:09:49.215890] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.011 [2024-07-22 18:09:49.216447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.011 [2024-07-22 18:09:49.216787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.011 [2024-07-22 18:09:49.216801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.011 [2024-07-22 18:09:49.216810] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.011 [2024-07-22 18:09:49.216944] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.011 [2024-07-22 18:09:49.217312] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.011 [2024-07-22 18:09:49.217326] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.011 [2024-07-22 18:09:49.217334] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.011 [2024-07-22 18:09:49.219539] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.011 [2024-07-22 18:09:49.228203] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.011 [2024-07-22 18:09:49.228569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.011 [2024-07-22 18:09:49.228890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.011 [2024-07-22 18:09:49.228900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.011 [2024-07-22 18:09:49.228908] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.011 [2024-07-22 18:09:49.229041] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.011 [2024-07-22 18:09:49.229141] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.011 [2024-07-22 18:09:49.229148] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.011 [2024-07-22 18:09:49.229155] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.011 [2024-07-22 18:09:49.231427] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.011 [2024-07-22 18:09:49.240526] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.011 [2024-07-22 18:09:49.241084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.011 [2024-07-22 18:09:49.241450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.011 [2024-07-22 18:09:49.241469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.011 [2024-07-22 18:09:49.241478] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.011 [2024-07-22 18:09:49.241629] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.011 [2024-07-22 18:09:49.241799] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.011 [2024-07-22 18:09:49.241808] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.011 [2024-07-22 18:09:49.241815] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.011 [2024-07-22 18:09:49.243903] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.011 [2024-07-22 18:09:49.252932] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.011 [2024-07-22 18:09:49.253448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.011 [2024-07-22 18:09:49.253763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.011 [2024-07-22 18:09:49.253776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.011 [2024-07-22 18:09:49.253785] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.011 [2024-07-22 18:09:49.253920] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.011 [2024-07-22 18:09:49.254056] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.011 [2024-07-22 18:09:49.254064] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.011 [2024-07-22 18:09:49.254071] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.011 [2024-07-22 18:09:49.256266] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.011 [2024-07-22 18:09:49.265314] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.011 [2024-07-22 18:09:49.265889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.011 [2024-07-22 18:09:49.266214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.011 [2024-07-22 18:09:49.266228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.011 [2024-07-22 18:09:49.266238] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.011 [2024-07-22 18:09:49.266435] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.011 [2024-07-22 18:09:49.266624] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.011 [2024-07-22 18:09:49.266633] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.011 [2024-07-22 18:09:49.266640] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.012 [2024-07-22 18:09:49.268790] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.012 [2024-07-22 18:09:49.277703] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.012 [2024-07-22 18:09:49.278197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.012 [2024-07-22 18:09:49.278528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.012 [2024-07-22 18:09:49.278543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.012 [2024-07-22 18:09:49.278556] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.012 [2024-07-22 18:09:49.278724] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.012 [2024-07-22 18:09:49.278809] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.012 [2024-07-22 18:09:49.278817] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.012 [2024-07-22 18:09:49.278824] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.274 [2024-07-22 18:09:49.280927] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.274 [2024-07-22 18:09:49.290067] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.274 [2024-07-22 18:09:49.290462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.274 [2024-07-22 18:09:49.290788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.274 [2024-07-22 18:09:49.290798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.274 [2024-07-22 18:09:49.290806] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.274 [2024-07-22 18:09:49.290938] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.274 [2024-07-22 18:09:49.291037] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.274 [2024-07-22 18:09:49.291044] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.274 [2024-07-22 18:09:49.291051] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.274 [2024-07-22 18:09:49.293063] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.274 [2024-07-22 18:09:49.302223] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.274 [2024-07-22 18:09:49.302769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.274 [2024-07-22 18:09:49.303103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.274 [2024-07-22 18:09:49.303116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.274 [2024-07-22 18:09:49.303125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.274 [2024-07-22 18:09:49.303293] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.274 [2024-07-22 18:09:49.303456] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.274 [2024-07-22 18:09:49.303465] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.274 [2024-07-22 18:09:49.303472] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.274 [2024-07-22 18:09:49.305710] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.274 [2024-07-22 18:09:49.314607] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.274 [2024-07-22 18:09:49.315187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.274 [2024-07-22 18:09:49.315544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.274 [2024-07-22 18:09:49.315559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.274 [2024-07-22 18:09:49.315568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.274 [2024-07-22 18:09:49.315706] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.274 [2024-07-22 18:09:49.315843] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.274 [2024-07-22 18:09:49.315851] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.275 [2024-07-22 18:09:49.315858] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.275 [2024-07-22 18:09:49.317979] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.275 [2024-07-22 18:09:49.326883] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.275 [2024-07-22 18:09:49.327452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.275 [2024-07-22 18:09:49.327679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.275 [2024-07-22 18:09:49.327692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.275 [2024-07-22 18:09:49.327701] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.275 [2024-07-22 18:09:49.327852] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.275 [2024-07-22 18:09:49.328006] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.275 [2024-07-22 18:09:49.328015] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.275 [2024-07-22 18:09:49.328022] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.275 [2024-07-22 18:09:49.330166] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.275 [2024-07-22 18:09:49.339168] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.275 [2024-07-22 18:09:49.339762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.275 [2024-07-22 18:09:49.340084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.275 [2024-07-22 18:09:49.340098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.275 [2024-07-22 18:09:49.340107] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.275 [2024-07-22 18:09:49.340258] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.275 [2024-07-22 18:09:49.340418] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.275 [2024-07-22 18:09:49.340428] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.275 [2024-07-22 18:09:49.340436] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.275 [2024-07-22 18:09:49.342505] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.275 [2024-07-22 18:09:49.351484] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.275 [2024-07-22 18:09:49.351852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.275 [2024-07-22 18:09:49.352087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.275 [2024-07-22 18:09:49.352098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.275 [2024-07-22 18:09:49.352105] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.275 [2024-07-22 18:09:49.352238] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.275 [2024-07-22 18:09:49.352363] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.275 [2024-07-22 18:09:49.352372] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.275 [2024-07-22 18:09:49.352379] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.275 [2024-07-22 18:09:49.354416] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.275 [2024-07-22 18:09:49.363879] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.275 [2024-07-22 18:09:49.364434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.275 [2024-07-22 18:09:49.364770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.275 [2024-07-22 18:09:49.364783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.275 [2024-07-22 18:09:49.364792] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.275 [2024-07-22 18:09:49.364943] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.275 [2024-07-22 18:09:49.365078] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.275 [2024-07-22 18:09:49.365087] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.275 [2024-07-22 18:09:49.365094] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.275 [2024-07-22 18:09:49.367235] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.275 [2024-07-22 18:09:49.376180] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.275 [2024-07-22 18:09:49.376774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.275 [2024-07-22 18:09:49.377104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.275 [2024-07-22 18:09:49.377119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.275 [2024-07-22 18:09:49.377128] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.275 [2024-07-22 18:09:49.377313] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.275 [2024-07-22 18:09:49.377420] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.275 [2024-07-22 18:09:49.377428] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.275 [2024-07-22 18:09:49.377436] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.275 [2024-07-22 18:09:49.379538] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.275 [2024-07-22 18:09:49.388627] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.275 [2024-07-22 18:09:49.389182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.275 [2024-07-22 18:09:49.389531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.275 [2024-07-22 18:09:49.389545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.275 [2024-07-22 18:09:49.389554] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.275 [2024-07-22 18:09:49.389739] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.275 [2024-07-22 18:09:49.389909] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.275 [2024-07-22 18:09:49.389921] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.275 [2024-07-22 18:09:49.389929] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.275 [2024-07-22 18:09:49.392062] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.275 [2024-07-22 18:09:49.400964] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.275 [2024-07-22 18:09:49.401411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.275 [2024-07-22 18:09:49.401732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.275 [2024-07-22 18:09:49.401742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.275 [2024-07-22 18:09:49.401749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.275 [2024-07-22 18:09:49.401848] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.275 [2024-07-22 18:09:49.401997] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.275 [2024-07-22 18:09:49.402004] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.275 [2024-07-22 18:09:49.402011] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.275 [2024-07-22 18:09:49.404157] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.275 [2024-07-22 18:09:49.413219] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.275 [2024-07-22 18:09:49.413685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.275 [2024-07-22 18:09:49.414018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.275 [2024-07-22 18:09:49.414027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.275 [2024-07-22 18:09:49.414034] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.275 [2024-07-22 18:09:49.414166] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.276 [2024-07-22 18:09:49.414264] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.276 [2024-07-22 18:09:49.414272] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.276 [2024-07-22 18:09:49.414279] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.276 [2024-07-22 18:09:49.416450] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.276 [2024-07-22 18:09:49.425623] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.276 [2024-07-22 18:09:49.426068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.276 [2024-07-22 18:09:49.426354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.276 [2024-07-22 18:09:49.426364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.276 [2024-07-22 18:09:49.426372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.276 [2024-07-22 18:09:49.426488] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.276 [2024-07-22 18:09:49.426587] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.276 [2024-07-22 18:09:49.426595] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.276 [2024-07-22 18:09:49.426605] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.276 [2024-07-22 18:09:49.428673] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.276 [2024-07-22 18:09:49.437948] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.276 [2024-07-22 18:09:49.438408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.276 [2024-07-22 18:09:49.438751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.276 [2024-07-22 18:09:49.438760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.276 [2024-07-22 18:09:49.438766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.276 [2024-07-22 18:09:49.438915] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.276 [2024-07-22 18:09:49.439030] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.276 [2024-07-22 18:09:49.439038] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.276 [2024-07-22 18:09:49.439044] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.276 [2024-07-22 18:09:49.441212] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.276 [2024-07-22 18:09:49.450225] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.276 [2024-07-22 18:09:49.450670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.276 [2024-07-22 18:09:49.450942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.276 [2024-07-22 18:09:49.450951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.276 [2024-07-22 18:09:49.450958] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.276 [2024-07-22 18:09:49.451040] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.276 [2024-07-22 18:09:49.451173] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.276 [2024-07-22 18:09:49.451180] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.276 [2024-07-22 18:09:49.451186] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.276 [2024-07-22 18:09:49.453255] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.276 [2024-07-22 18:09:49.462749] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.276 [2024-07-22 18:09:49.463202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.276 [2024-07-22 18:09:49.463276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.276 [2024-07-22 18:09:49.463287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.276 [2024-07-22 18:09:49.463295] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.276 [2024-07-22 18:09:49.463468] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.276 [2024-07-22 18:09:49.463551] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.276 [2024-07-22 18:09:49.463558] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.276 [2024-07-22 18:09:49.463565] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.276 [2024-07-22 18:09:49.465551] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.276 [2024-07-22 18:09:49.475080] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.276 [2024-07-22 18:09:49.476227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.276 [2024-07-22 18:09:49.476583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.276 [2024-07-22 18:09:49.476595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.276 [2024-07-22 18:09:49.476604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.276 [2024-07-22 18:09:49.476765] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.276 [2024-07-22 18:09:49.476866] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.276 [2024-07-22 18:09:49.476874] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.276 [2024-07-22 18:09:49.476881] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.276 [2024-07-22 18:09:49.479061] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.276 [2024-07-22 18:09:49.487411] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.276 [2024-07-22 18:09:49.487895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.276 [2024-07-22 18:09:49.488188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.276 [2024-07-22 18:09:49.488197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.276 [2024-07-22 18:09:49.488204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.276 [2024-07-22 18:09:49.488303] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.276 [2024-07-22 18:09:49.488475] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.276 [2024-07-22 18:09:49.488483] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.276 [2024-07-22 18:09:49.488490] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.276 [2024-07-22 18:09:49.490719] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.276 [2024-07-22 18:09:49.499685] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.276 [2024-07-22 18:09:49.500161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.276 [2024-07-22 18:09:49.501235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.276 [2024-07-22 18:09:49.501259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.276 [2024-07-22 18:09:49.501268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.276 [2024-07-22 18:09:49.501386] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.276 [2024-07-22 18:09:49.501504] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.276 [2024-07-22 18:09:49.501512] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.276 [2024-07-22 18:09:49.501519] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.276 [2024-07-22 18:09:49.503511] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.276 [2024-07-22 18:09:49.511889] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.276 [2024-07-22 18:09:49.512390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.276 [2024-07-22 18:09:49.512745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.277 [2024-07-22 18:09:49.512755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.277 [2024-07-22 18:09:49.512763] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.277 [2024-07-22 18:09:49.512897] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.277 [2024-07-22 18:09:49.513064] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.277 [2024-07-22 18:09:49.513073] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.277 [2024-07-22 18:09:49.513080] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.277 [2024-07-22 18:09:49.515150] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.277 [2024-07-22 18:09:49.524074] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.277 [2024-07-22 18:09:49.524542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.277 [2024-07-22 18:09:49.524894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.277 [2024-07-22 18:09:49.524903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.277 [2024-07-22 18:09:49.524910] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.277 [2024-07-22 18:09:49.525043] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.277 [2024-07-22 18:09:49.525159] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.277 [2024-07-22 18:09:49.525166] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.277 [2024-07-22 18:09:49.525173] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.277 [2024-07-22 18:09:49.527178] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.277 [2024-07-22 18:09:49.536414] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.277 [2024-07-22 18:09:49.536850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.277 [2024-07-22 18:09:49.537200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.277 [2024-07-22 18:09:49.537211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.277 [2024-07-22 18:09:49.537218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.277 [2024-07-22 18:09:49.537317] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.277 [2024-07-22 18:09:49.537489] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.277 [2024-07-22 18:09:49.537498] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.277 [2024-07-22 18:09:49.537505] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.277 [2024-07-22 18:09:49.539664] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.540 [2024-07-22 18:09:49.548887] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.540 [2024-07-22 18:09:49.549354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.540 [2024-07-22 18:09:49.549716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.540 [2024-07-22 18:09:49.549726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.540 [2024-07-22 18:09:49.549733] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.540 [2024-07-22 18:09:49.549885] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.540 [2024-07-22 18:09:49.550034] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.540 [2024-07-22 18:09:49.550041] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.540 [2024-07-22 18:09:49.550048] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.540 [2024-07-22 18:09:49.552150] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.540 [2024-07-22 18:09:49.561186] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.540 [2024-07-22 18:09:49.561718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.540 [2024-07-22 18:09:49.562111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.540 [2024-07-22 18:09:49.562126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.540 [2024-07-22 18:09:49.562136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.540 [2024-07-22 18:09:49.562301] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.540 [2024-07-22 18:09:49.562483] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.540 [2024-07-22 18:09:49.562493] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.540 [2024-07-22 18:09:49.562500] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.540 [2024-07-22 18:09:49.564635] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.540 [2024-07-22 18:09:49.573538] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.540 [2024-07-22 18:09:49.574089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.540 [2024-07-22 18:09:49.574402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.540 [2024-07-22 18:09:49.574414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.540 [2024-07-22 18:09:49.574422] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.540 [2024-07-22 18:09:49.574593] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.540 [2024-07-22 18:09:49.574727] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.540 [2024-07-22 18:09:49.574735] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.541 [2024-07-22 18:09:49.574742] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.541 [2024-07-22 18:09:49.576750] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.541 [2024-07-22 18:09:49.585986] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.541 [2024-07-22 18:09:49.586330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.541 [2024-07-22 18:09:49.586648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.541 [2024-07-22 18:09:49.586665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.541 [2024-07-22 18:09:49.586674] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.541 [2024-07-22 18:09:49.586761] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.541 [2024-07-22 18:09:49.586932] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.541 [2024-07-22 18:09:49.586941] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.541 [2024-07-22 18:09:49.586948] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.541 [2024-07-22 18:09:49.589053] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.541 [2024-07-22 18:09:49.598159] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.541 [2024-07-22 18:09:49.598664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.541 [2024-07-22 18:09:49.599054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.541 [2024-07-22 18:09:49.599064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.541 [2024-07-22 18:09:49.599071] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.541 [2024-07-22 18:09:49.599206] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.541 [2024-07-22 18:09:49.599289] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.541 [2024-07-22 18:09:49.599298] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.541 [2024-07-22 18:09:49.599305] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.541 [2024-07-22 18:09:49.601558] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.541 [2024-07-22 18:09:49.610478] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.541 [2024-07-22 18:09:49.610995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.541 [2024-07-22 18:09:49.611224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.541 [2024-07-22 18:09:49.611234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.541 [2024-07-22 18:09:49.611241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.541 [2024-07-22 18:09:49.611380] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.541 [2024-07-22 18:09:49.611499] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.541 [2024-07-22 18:09:49.611509] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.541 [2024-07-22 18:09:49.611516] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.541 [2024-07-22 18:09:49.613505] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.541 [2024-07-22 18:09:49.623024] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.541 [2024-07-22 18:09:49.623531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.541 [2024-07-22 18:09:49.623880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.541 [2024-07-22 18:09:49.623889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.541 [2024-07-22 18:09:49.623902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.541 [2024-07-22 18:09:49.624055] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.541 [2024-07-22 18:09:49.624205] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.541 [2024-07-22 18:09:49.624212] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.541 [2024-07-22 18:09:49.624219] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.541 [2024-07-22 18:09:49.626241] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.541 [2024-07-22 18:09:49.635194] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.541 [2024-07-22 18:09:49.635734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.541 [2024-07-22 18:09:49.635958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.541 [2024-07-22 18:09:49.635968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.541 [2024-07-22 18:09:49.635976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.541 [2024-07-22 18:09:49.636111] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.541 [2024-07-22 18:09:49.636263] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.541 [2024-07-22 18:09:49.636272] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.541 [2024-07-22 18:09:49.636279] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.541 [2024-07-22 18:09:49.638258] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.541 [2024-07-22 18:09:49.647521] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.541 [2024-07-22 18:09:49.648108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.541 [2024-07-22 18:09:49.648504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.541 [2024-07-22 18:09:49.648519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.541 [2024-07-22 18:09:49.648529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.541 [2024-07-22 18:09:49.648699] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.541 [2024-07-22 18:09:49.648839] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.541 [2024-07-22 18:09:49.648848] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.541 [2024-07-22 18:09:49.648855] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.541 [2024-07-22 18:09:49.651067] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.541 [2024-07-22 18:09:49.659778] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.541 [2024-07-22 18:09:49.660213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.541 [2024-07-22 18:09:49.660576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.541 [2024-07-22 18:09:49.660587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.541 [2024-07-22 18:09:49.660595] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.541 [2024-07-22 18:09:49.660738] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.541 [2024-07-22 18:09:49.660873] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.541 [2024-07-22 18:09:49.660880] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.541 [2024-07-22 18:09:49.660887] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.541 [2024-07-22 18:09:49.663050] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.541 [2024-07-22 18:09:49.672084] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.541 [2024-07-22 18:09:49.672567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.541 [2024-07-22 18:09:49.672918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.541 [2024-07-22 18:09:49.672928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.541 [2024-07-22 18:09:49.672935] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.541 [2024-07-22 18:09:49.673071] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.541 [2024-07-22 18:09:49.673190] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.541 [2024-07-22 18:09:49.673198] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.541 [2024-07-22 18:09:49.673205] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.541 [2024-07-22 18:09:49.675437] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.541 [2024-07-22 18:09:49.684475] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.541 [2024-07-22 18:09:49.684983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.541 [2024-07-22 18:09:49.685335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.541 [2024-07-22 18:09:49.685345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.541 [2024-07-22 18:09:49.685358] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.541 [2024-07-22 18:09:49.685530] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.541 [2024-07-22 18:09:49.685679] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.541 [2024-07-22 18:09:49.685688] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.541 [2024-07-22 18:09:49.685696] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.541 [2024-07-22 18:09:49.687764] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.542 [2024-07-22 18:09:49.696804] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.542 [2024-07-22 18:09:49.697299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.542 [2024-07-22 18:09:49.697592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.542 [2024-07-22 18:09:49.697604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.542 [2024-07-22 18:09:49.697612] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.542 [2024-07-22 18:09:49.697746] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.542 [2024-07-22 18:09:49.697889] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.542 [2024-07-22 18:09:49.697897] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.542 [2024-07-22 18:09:49.697904] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.542 [2024-07-22 18:09:49.700203] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.542 [2024-07-22 18:09:49.709088] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.542 [2024-07-22 18:09:49.710556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.542 [2024-07-22 18:09:49.710960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.542 [2024-07-22 18:09:49.710975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.542 [2024-07-22 18:09:49.710984] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.542 [2024-07-22 18:09:49.711142] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.542 [2024-07-22 18:09:49.711296] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.542 [2024-07-22 18:09:49.711305] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.542 [2024-07-22 18:09:49.711312] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.542 [2024-07-22 18:09:49.713560] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.542 [2024-07-22 18:09:49.721244] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.542 [2024-07-22 18:09:49.721714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.542 [2024-07-22 18:09:49.722066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.542 [2024-07-22 18:09:49.722078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.542 [2024-07-22 18:09:49.722086] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.542 [2024-07-22 18:09:49.722205] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.542 [2024-07-22 18:09:49.722341] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.542 [2024-07-22 18:09:49.722356] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.542 [2024-07-22 18:09:49.722364] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.542 [2024-07-22 18:09:49.724609] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.542 [2024-07-22 18:09:49.733547] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.542 [2024-07-22 18:09:49.734074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.542 [2024-07-22 18:09:49.734480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.542 [2024-07-22 18:09:49.734496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.542 [2024-07-22 18:09:49.734507] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.542 [2024-07-22 18:09:49.734692] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.542 [2024-07-22 18:09:49.734867] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.542 [2024-07-22 18:09:49.734881] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.542 [2024-07-22 18:09:49.734889] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.542 [2024-07-22 18:09:49.737137] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.542 [2024-07-22 18:09:49.745893] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.542 [2024-07-22 18:09:49.746380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.542 [2024-07-22 18:09:49.746598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.542 [2024-07-22 18:09:49.746610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.542 [2024-07-22 18:09:49.746618] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.542 [2024-07-22 18:09:49.746738] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.542 [2024-07-22 18:09:49.746872] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.542 [2024-07-22 18:09:49.746880] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.542 [2024-07-22 18:09:49.746887] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.542 [2024-07-22 18:09:49.749083] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.542 [2024-07-22 18:09:49.758042] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.542 [2024-07-22 18:09:49.758679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.542 [2024-07-22 18:09:49.758987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.542 [2024-07-22 18:09:49.759001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.542 [2024-07-22 18:09:49.759012] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.542 [2024-07-22 18:09:49.759181] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.542 [2024-07-22 18:09:49.759417] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.542 [2024-07-22 18:09:49.759427] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.542 [2024-07-22 18:09:49.759434] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.542 [2024-07-22 18:09:49.761521] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.542 [2024-07-22 18:09:49.770370] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.542 [2024-07-22 18:09:49.770876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.542 [2024-07-22 18:09:49.771234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.542 [2024-07-22 18:09:49.771246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.542 [2024-07-22 18:09:49.771254] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.542 [2024-07-22 18:09:49.771397] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.542 [2024-07-22 18:09:49.771482] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.542 [2024-07-22 18:09:49.771490] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.542 [2024-07-22 18:09:49.771503] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.542 [2024-07-22 18:09:49.773658] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.542 [2024-07-22 18:09:49.782777] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.542 [2024-07-22 18:09:49.783395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.542 [2024-07-22 18:09:49.783817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.542 [2024-07-22 18:09:49.783832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.542 [2024-07-22 18:09:49.783843] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.542 [2024-07-22 18:09:49.784011] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.542 [2024-07-22 18:09:49.784183] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.542 [2024-07-22 18:09:49.784193] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.542 [2024-07-22 18:09:49.784200] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.542 [2024-07-22 18:09:49.786257] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.542 [2024-07-22 18:09:49.795207] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.542 [2024-07-22 18:09:49.795737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.542 [2024-07-22 18:09:49.796086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.542 [2024-07-22 18:09:49.796096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.542 [2024-07-22 18:09:49.796104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.542 [2024-07-22 18:09:49.796272] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.542 [2024-07-22 18:09:49.796414] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.542 [2024-07-22 18:09:49.796423] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.542 [2024-07-22 18:09:49.796430] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.542 [2024-07-22 18:09:49.798502] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.542 [2024-07-22 18:09:49.807595] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.542 [2024-07-22 18:09:49.808119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.542 [2024-07-22 18:09:49.808451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.543 [2024-07-22 18:09:49.808462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.543 [2024-07-22 18:09:49.808469] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.543 [2024-07-22 18:09:49.808570] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.543 [2024-07-22 18:09:49.808722] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.543 [2024-07-22 18:09:49.808730] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.543 [2024-07-22 18:09:49.808737] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.543 [2024-07-22 18:09:49.810821] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.806 [2024-07-22 18:09:49.819735] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.806 [2024-07-22 18:09:49.820237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.806 [2024-07-22 18:09:49.820458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.806 [2024-07-22 18:09:49.820469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.806 [2024-07-22 18:09:49.820477] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.806 [2024-07-22 18:09:49.820629] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.806 [2024-07-22 18:09:49.820763] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.806 [2024-07-22 18:09:49.820772] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.806 [2024-07-22 18:09:49.820778] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.806 [2024-07-22 18:09:49.822838] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.806 [2024-07-22 18:09:49.832203] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.806 [2024-07-22 18:09:49.832664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.806 [2024-07-22 18:09:49.833018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.806 [2024-07-22 18:09:49.833028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.806 [2024-07-22 18:09:49.833036] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.806 [2024-07-22 18:09:49.833156] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.806 [2024-07-22 18:09:49.833290] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.806 [2024-07-22 18:09:49.833297] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.806 [2024-07-22 18:09:49.833304] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.806 [2024-07-22 18:09:49.835440] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.806 [2024-07-22 18:09:49.844491] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.806 [2024-07-22 18:09:49.845002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.806 [2024-07-22 18:09:49.845362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.806 [2024-07-22 18:09:49.845374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.806 [2024-07-22 18:09:49.845381] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.806 [2024-07-22 18:09:49.845499] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.806 [2024-07-22 18:09:49.845651] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.806 [2024-07-22 18:09:49.845658] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.806 [2024-07-22 18:09:49.845665] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.806 [2024-07-22 18:09:49.847727] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.807 [2024-07-22 18:09:49.856816] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.807 [2024-07-22 18:09:49.857336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-07-22 18:09:49.857679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-07-22 18:09:49.857690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.807 [2024-07-22 18:09:49.857697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.807 [2024-07-22 18:09:49.857866] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.807 [2024-07-22 18:09:49.857982] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.807 [2024-07-22 18:09:49.857990] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.807 [2024-07-22 18:09:49.857997] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.807 [2024-07-22 18:09:49.860156] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.807 [2024-07-22 18:09:49.869179] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.807 [2024-07-22 18:09:49.869697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-07-22 18:09:49.869992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-07-22 18:09:49.870001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.807 [2024-07-22 18:09:49.870010] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.807 [2024-07-22 18:09:49.870144] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.807 [2024-07-22 18:09:49.870295] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.807 [2024-07-22 18:09:49.870303] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.807 [2024-07-22 18:09:49.870309] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.807 [2024-07-22 18:09:49.872353] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.807 [2024-07-22 18:09:49.881545] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.807 [2024-07-22 18:09:49.882054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-07-22 18:09:49.882276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-07-22 18:09:49.882289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.807 [2024-07-22 18:09:49.882296] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.807 [2024-07-22 18:09:49.882390] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.807 [2024-07-22 18:09:49.882524] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.807 [2024-07-22 18:09:49.882532] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.807 [2024-07-22 18:09:49.882539] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.807 [2024-07-22 18:09:49.884619] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.807 [2024-07-22 18:09:49.893868] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.807 [2024-07-22 18:09:49.894402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-07-22 18:09:49.894744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-07-22 18:09:49.894756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.807 [2024-07-22 18:09:49.894764] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.807 [2024-07-22 18:09:49.894866] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.807 [2024-07-22 18:09:49.895035] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.807 [2024-07-22 18:09:49.895043] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.807 [2024-07-22 18:09:49.895051] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.807 [2024-07-22 18:09:49.897230] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.807 [2024-07-22 18:09:49.906201] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.807 [2024-07-22 18:09:49.906702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-07-22 18:09:49.907091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-07-22 18:09:49.907101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.807 [2024-07-22 18:09:49.907108] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.807 [2024-07-22 18:09:49.907241] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.807 [2024-07-22 18:09:49.907415] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.807 [2024-07-22 18:09:49.907424] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.807 [2024-07-22 18:09:49.907430] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.807 [2024-07-22 18:09:49.909503] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.807 [2024-07-22 18:09:49.918574] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.807 [2024-07-22 18:09:49.919127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-07-22 18:09:49.919475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-07-22 18:09:49.919486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.807 [2024-07-22 18:09:49.919493] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.807 [2024-07-22 18:09:49.919627] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.807 [2024-07-22 18:09:49.919744] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.807 [2024-07-22 18:09:49.919751] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.807 [2024-07-22 18:09:49.919758] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.807 [2024-07-22 18:09:49.921880] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.807 [2024-07-22 18:09:49.931004] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.807 [2024-07-22 18:09:49.931614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-07-22 18:09:49.932037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-07-22 18:09:49.932056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.807 [2024-07-22 18:09:49.932068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.807 [2024-07-22 18:09:49.932218] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.807 [2024-07-22 18:09:49.932322] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.807 [2024-07-22 18:09:49.932330] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.807 [2024-07-22 18:09:49.932337] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.807 [2024-07-22 18:09:49.934461] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.807 [2024-07-22 18:09:49.943257] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.807 [2024-07-22 18:09:49.943784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-07-22 18:09:49.943966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-07-22 18:09:49.943979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.807 [2024-07-22 18:09:49.943988] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.807 [2024-07-22 18:09:49.944109] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.807 [2024-07-22 18:09:49.944228] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.807 [2024-07-22 18:09:49.944237] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.807 [2024-07-22 18:09:49.944245] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.807 [2024-07-22 18:09:49.946343] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.807 [2024-07-22 18:09:49.955485] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.807 [2024-07-22 18:09:49.955848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-07-22 18:09:49.956210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.807 [2024-07-22 18:09:49.956219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.807 [2024-07-22 18:09:49.956227] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.808 [2024-07-22 18:09:49.956370] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.808 [2024-07-22 18:09:49.956523] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.808 [2024-07-22 18:09:49.956531] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.808 [2024-07-22 18:09:49.956538] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.808 [2024-07-22 18:09:49.958732] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.808 [2024-07-22 18:09:49.967763] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.808 [2024-07-22 18:09:49.968392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-07-22 18:09:49.968718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-07-22 18:09:49.968732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.808 [2024-07-22 18:09:49.968749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.808 [2024-07-22 18:09:49.968952] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.808 [2024-07-22 18:09:49.969056] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.808 [2024-07-22 18:09:49.969063] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.808 [2024-07-22 18:09:49.969071] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.808 [2024-07-22 18:09:49.971203] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.808 [2024-07-22 18:09:49.980148] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.808 [2024-07-22 18:09:49.980648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-07-22 18:09:49.980981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-07-22 18:09:49.980991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.808 [2024-07-22 18:09:49.980999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.808 [2024-07-22 18:09:49.981135] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.808 [2024-07-22 18:09:49.981252] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.808 [2024-07-22 18:09:49.981260] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.808 [2024-07-22 18:09:49.981267] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.808 [2024-07-22 18:09:49.983458] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.808 [2024-07-22 18:09:49.992550] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.808 [2024-07-22 18:09:49.993184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-07-22 18:09:49.993579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-07-22 18:09:49.993595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.808 [2024-07-22 18:09:49.993606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.808 [2024-07-22 18:09:49.993790] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.808 [2024-07-22 18:09:49.993912] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.808 [2024-07-22 18:09:49.993920] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.808 [2024-07-22 18:09:49.993927] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.808 [2024-07-22 18:09:49.995880] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.808 [2024-07-22 18:09:50.005295] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.808 [2024-07-22 18:09:50.005761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-07-22 18:09:50.006000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-07-22 18:09:50.006012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.808 [2024-07-22 18:09:50.006021] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.808 [2024-07-22 18:09:50.006165] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.808 [2024-07-22 18:09:50.006266] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.808 [2024-07-22 18:09:50.006275] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.808 [2024-07-22 18:09:50.006283] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.808 [2024-07-22 18:09:50.008402] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.808 [2024-07-22 18:09:50.017717] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.808 [2024-07-22 18:09:50.018119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-07-22 18:09:50.018379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-07-22 18:09:50.018391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.808 [2024-07-22 18:09:50.018399] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.808 [2024-07-22 18:09:50.018500] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.808 [2024-07-22 18:09:50.018686] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.808 [2024-07-22 18:09:50.018695] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.808 [2024-07-22 18:09:50.018702] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.808 [2024-07-22 18:09:50.020847] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.808 [2024-07-22 18:09:50.029981] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.808 [2024-07-22 18:09:50.030445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-07-22 18:09:50.030824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-07-22 18:09:50.030835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.808 [2024-07-22 18:09:50.030844] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.808 [2024-07-22 18:09:50.031045] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.808 [2024-07-22 18:09:50.031164] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.808 [2024-07-22 18:09:50.031173] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.808 [2024-07-22 18:09:50.031182] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.808 [2024-07-22 18:09:50.033272] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.808 [2024-07-22 18:09:50.042318] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.808 [2024-07-22 18:09:50.042922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-07-22 18:09:50.043314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-07-22 18:09:50.043334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.808 [2024-07-22 18:09:50.043365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.808 [2024-07-22 18:09:50.043559] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.808 [2024-07-22 18:09:50.043714] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.808 [2024-07-22 18:09:50.043724] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.808 [2024-07-22 18:09:50.043732] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.808 [2024-07-22 18:09:50.045753] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.808 [2024-07-22 18:09:50.054564] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.808 [2024-07-22 18:09:50.055120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-07-22 18:09:50.055525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.808 [2024-07-22 18:09:50.055541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.808 [2024-07-22 18:09:50.055552] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.808 [2024-07-22 18:09:50.055702] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.808 [2024-07-22 18:09:50.055806] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.808 [2024-07-22 18:09:50.055815] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.809 [2024-07-22 18:09:50.055822] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.809 [2024-07-22 18:09:50.057861] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:45.809 [2024-07-22 18:09:50.066894] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.809 [2024-07-22 18:09:50.067408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-07-22 18:09:50.067771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-07-22 18:09:50.067782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.809 [2024-07-22 18:09:50.067790] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.809 [2024-07-22 18:09:50.067908] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.809 [2024-07-22 18:09:50.068061] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.809 [2024-07-22 18:09:50.068069] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.809 [2024-07-22 18:09:50.068076] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:45.809 [2024-07-22 18:09:50.070430] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:45.809 [2024-07-22 18:09:50.079086] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:45.809 [2024-07-22 18:09:50.079667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-07-22 18:09:50.080099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:45.809 [2024-07-22 18:09:50.080113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:45.809 [2024-07-22 18:09:50.080125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:45.809 [2024-07-22 18:09:50.080276] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:45.809 [2024-07-22 18:09:50.080392] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:45.809 [2024-07-22 18:09:50.080408] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:45.809 [2024-07-22 18:09:50.080415] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.072 [2024-07-22 18:09:50.082521] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.072 [2024-07-22 18:09:50.091369] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.072 [2024-07-22 18:09:50.091997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-07-22 18:09:50.092411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-07-22 18:09:50.092428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.072 [2024-07-22 18:09:50.092439] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.072 [2024-07-22 18:09:50.092592] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.072 [2024-07-22 18:09:50.092735] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.072 [2024-07-22 18:09:50.092744] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.072 [2024-07-22 18:09:50.092752] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.072 [2024-07-22 18:09:50.094897] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.072 [2024-07-22 18:09:50.103676] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.072 [2024-07-22 18:09:50.104295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-07-22 18:09:50.104715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-07-22 18:09:50.104729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.072 [2024-07-22 18:09:50.104740] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.072 [2024-07-22 18:09:50.104927] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.072 [2024-07-22 18:09:50.105048] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.072 [2024-07-22 18:09:50.105057] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.072 [2024-07-22 18:09:50.105064] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.072 [2024-07-22 18:09:50.107054] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.072 [2024-07-22 18:09:50.115987] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.072 [2024-07-22 18:09:50.116480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-07-22 18:09:50.116863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.072 [2024-07-22 18:09:50.116878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.072 [2024-07-22 18:09:50.116889] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.072 [2024-07-22 18:09:50.117055] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.072 [2024-07-22 18:09:50.117194] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.072 [2024-07-22 18:09:50.117203] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.072 [2024-07-22 18:09:50.117217] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.073 [2024-07-22 18:09:50.119329] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.073 [2024-07-22 18:09:50.128365] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.073 [2024-07-22 18:09:50.128958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-07-22 18:09:50.129368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-07-22 18:09:50.129383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.073 [2024-07-22 18:09:50.129394] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.073 [2024-07-22 18:09:50.129537] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.073 [2024-07-22 18:09:50.129640] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.073 [2024-07-22 18:09:50.129649] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.073 [2024-07-22 18:09:50.129656] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.073 [2024-07-22 18:09:50.131741] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.073 [2024-07-22 18:09:50.140707] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.073 [2024-07-22 18:09:50.141208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-07-22 18:09:50.141516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-07-22 18:09:50.141526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.073 [2024-07-22 18:09:50.141534] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.073 [2024-07-22 18:09:50.141720] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.073 [2024-07-22 18:09:50.141854] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.073 [2024-07-22 18:09:50.141861] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.073 [2024-07-22 18:09:50.141868] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.073 [2024-07-22 18:09:50.143856] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.073 [2024-07-22 18:09:50.153101] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.073 [2024-07-22 18:09:50.153609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-07-22 18:09:50.153929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-07-22 18:09:50.153938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.073 [2024-07-22 18:09:50.153946] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.073 [2024-07-22 18:09:50.154114] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.073 [2024-07-22 18:09:50.154264] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.073 [2024-07-22 18:09:50.154272] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.073 [2024-07-22 18:09:50.154278] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.073 [2024-07-22 18:09:50.156497] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.073 [2024-07-22 18:09:50.165604] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.073 [2024-07-22 18:09:50.166054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-07-22 18:09:50.166387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-07-22 18:09:50.166399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.073 [2024-07-22 18:09:50.166406] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.073 [2024-07-22 18:09:50.166558] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.073 [2024-07-22 18:09:50.166658] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.073 [2024-07-22 18:09:50.166667] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.073 [2024-07-22 18:09:50.166674] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.073 [2024-07-22 18:09:50.168578] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.073 [2024-07-22 18:09:50.177782] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.073 [2024-07-22 18:09:50.178250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-07-22 18:09:50.178576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-07-22 18:09:50.178586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.073 [2024-07-22 18:09:50.178594] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.073 [2024-07-22 18:09:50.178729] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.073 [2024-07-22 18:09:50.178896] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.073 [2024-07-22 18:09:50.178904] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.073 [2024-07-22 18:09:50.178911] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.073 [2024-07-22 18:09:50.180972] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.073 [2024-07-22 18:09:50.190183] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.073 [2024-07-22 18:09:50.190588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-07-22 18:09:50.190911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-07-22 18:09:50.190921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.073 [2024-07-22 18:09:50.190930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.073 [2024-07-22 18:09:50.191033] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.073 [2024-07-22 18:09:50.191152] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.073 [2024-07-22 18:09:50.191160] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.073 [2024-07-22 18:09:50.191168] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.073 [2024-07-22 18:09:50.193118] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.073 [2024-07-22 18:09:50.202490] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.073 [2024-07-22 18:09:50.203012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-07-22 18:09:50.203329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-07-22 18:09:50.203339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.073 [2024-07-22 18:09:50.203346] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.073 [2024-07-22 18:09:50.203509] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.073 [2024-07-22 18:09:50.203675] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.073 [2024-07-22 18:09:50.203683] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.073 [2024-07-22 18:09:50.203690] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.073 [2024-07-22 18:09:50.205747] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.073 [2024-07-22 18:09:50.214888] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.073 [2024-07-22 18:09:50.215440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-07-22 18:09:50.215850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-07-22 18:09:50.215860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.073 [2024-07-22 18:09:50.215869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.073 [2024-07-22 18:09:50.216039] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.073 [2024-07-22 18:09:50.216174] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.073 [2024-07-22 18:09:50.216182] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.073 [2024-07-22 18:09:50.216189] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.073 [2024-07-22 18:09:50.218500] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.073 [2024-07-22 18:09:50.227025] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.073 [2024-07-22 18:09:50.227503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-07-22 18:09:50.227961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.073 [2024-07-22 18:09:50.227975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.073 [2024-07-22 18:09:50.227985] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.073 [2024-07-22 18:09:50.228171] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.073 [2024-07-22 18:09:50.228327] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.073 [2024-07-22 18:09:50.228335] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.073 [2024-07-22 18:09:50.228343] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.073 [2024-07-22 18:09:50.230511] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.073 [2024-07-22 18:09:50.239497] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.073 [2024-07-22 18:09:50.239892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-07-22 18:09:50.240134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-07-22 18:09:50.240145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.074 [2024-07-22 18:09:50.240153] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.074 [2024-07-22 18:09:50.240272] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.074 [2024-07-22 18:09:50.240434] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.074 [2024-07-22 18:09:50.240442] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.074 [2024-07-22 18:09:50.240450] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.074 [2024-07-22 18:09:50.242547] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.074 [2024-07-22 18:09:50.251998] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.074 [2024-07-22 18:09:50.252477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-07-22 18:09:50.252878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-07-22 18:09:50.252892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.074 [2024-07-22 18:09:50.252903] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.074 [2024-07-22 18:09:50.253053] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.074 [2024-07-22 18:09:50.253192] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.074 [2024-07-22 18:09:50.253199] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.074 [2024-07-22 18:09:50.253207] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.074 [2024-07-22 18:09:50.255467] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.074 [2024-07-22 18:09:50.264299] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.074 [2024-07-22 18:09:50.264889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-07-22 18:09:50.265251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-07-22 18:09:50.265265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.074 [2024-07-22 18:09:50.265276] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.074 [2024-07-22 18:09:50.265452] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.074 [2024-07-22 18:09:50.265620] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.074 [2024-07-22 18:09:50.265628] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.074 [2024-07-22 18:09:50.265636] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.074 [2024-07-22 18:09:50.267674] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.074 [2024-07-22 18:09:50.276604] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.074 [2024-07-22 18:09:50.277181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-07-22 18:09:50.277583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-07-22 18:09:50.277608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.074 [2024-07-22 18:09:50.277620] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.074 [2024-07-22 18:09:50.277805] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.074 [2024-07-22 18:09:50.277944] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.074 [2024-07-22 18:09:50.277952] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.074 [2024-07-22 18:09:50.277959] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.074 [2024-07-22 18:09:50.280099] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.074 [2024-07-22 18:09:50.288957] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.074 [2024-07-22 18:09:50.289460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-07-22 18:09:50.289839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-07-22 18:09:50.289853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.074 [2024-07-22 18:09:50.289864] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.074 [2024-07-22 18:09:50.290032] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.074 [2024-07-22 18:09:50.290187] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.074 [2024-07-22 18:09:50.290196] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.074 [2024-07-22 18:09:50.290204] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.074 [2024-07-22 18:09:50.292300] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.074 [2024-07-22 18:09:50.301315] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.074 [2024-07-22 18:09:50.301830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-07-22 18:09:50.302216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-07-22 18:09:50.302230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.074 [2024-07-22 18:09:50.302241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.074 [2024-07-22 18:09:50.302493] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.074 [2024-07-22 18:09:50.302598] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.074 [2024-07-22 18:09:50.302607] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.074 [2024-07-22 18:09:50.302614] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.074 [2024-07-22 18:09:50.304634] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.074 [2024-07-22 18:09:50.313550] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.074 [2024-07-22 18:09:50.314166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-07-22 18:09:50.314561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-07-22 18:09:50.314577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.074 [2024-07-22 18:09:50.314594] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.074 [2024-07-22 18:09:50.314779] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.074 [2024-07-22 18:09:50.314901] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.074 [2024-07-22 18:09:50.314910] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.074 [2024-07-22 18:09:50.314918] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.074 [2024-07-22 18:09:50.316888] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.074 [2024-07-22 18:09:50.325998] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.074 [2024-07-22 18:09:50.326645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-07-22 18:09:50.327027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-07-22 18:09:50.327041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.074 [2024-07-22 18:09:50.327052] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.074 [2024-07-22 18:09:50.327203] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.074 [2024-07-22 18:09:50.327341] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.074 [2024-07-22 18:09:50.327366] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.074 [2024-07-22 18:09:50.327375] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.074 [2024-07-22 18:09:50.329429] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.074 [2024-07-22 18:09:50.338282] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.074 [2024-07-22 18:09:50.338854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-07-22 18:09:50.339223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.074 [2024-07-22 18:09:50.339235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.074 [2024-07-22 18:09:50.339246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.074 [2024-07-22 18:09:50.339419] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.074 [2024-07-22 18:09:50.339558] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.074 [2024-07-22 18:09:50.339566] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.074 [2024-07-22 18:09:50.339574] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.074 [2024-07-22 18:09:50.341540] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.336 [2024-07-22 18:09:50.350570] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.336 [2024-07-22 18:09:50.351046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.336 [2024-07-22 18:09:50.351421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.336 [2024-07-22 18:09:50.351437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.336 [2024-07-22 18:09:50.351447] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.336 [2024-07-22 18:09:50.351597] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.336 [2024-07-22 18:09:50.351752] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.336 [2024-07-22 18:09:50.351760] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.336 [2024-07-22 18:09:50.351767] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.336 [2024-07-22 18:09:50.353998] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.336 [2024-07-22 18:09:50.362867] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.336 [2024-07-22 18:09:50.363461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.336 [2024-07-22 18:09:50.363812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.336 [2024-07-22 18:09:50.363825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.336 [2024-07-22 18:09:50.363834] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.336 [2024-07-22 18:09:50.363958] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.336 [2024-07-22 18:09:50.364077] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.336 [2024-07-22 18:09:50.364085] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.336 [2024-07-22 18:09:50.364092] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.336 [2024-07-22 18:09:50.366160] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.336 [2024-07-22 18:09:50.375239] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.336 [2024-07-22 18:09:50.375753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.336 [2024-07-22 18:09:50.376110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.336 [2024-07-22 18:09:50.376122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.336 [2024-07-22 18:09:50.376132] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.336 [2024-07-22 18:09:50.376323] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.336 [2024-07-22 18:09:50.376471] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.336 [2024-07-22 18:09:50.376479] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.336 [2024-07-22 18:09:50.376488] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.336 [2024-07-22 18:09:50.378614] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.336 [2024-07-22 18:09:50.387447] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.336 [2024-07-22 18:09:50.387980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.336 [2024-07-22 18:09:50.388224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.336 [2024-07-22 18:09:50.388236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.336 [2024-07-22 18:09:50.388246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.336 [2024-07-22 18:09:50.388361] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.336 [2024-07-22 18:09:50.388504] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.336 [2024-07-22 18:09:50.388513] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.336 [2024-07-22 18:09:50.388520] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.336 [2024-07-22 18:09:50.390530] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.336 [2024-07-22 18:09:50.399785] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.336 [2024-07-22 18:09:50.400201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.336 [2024-07-22 18:09:50.400449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.336 [2024-07-22 18:09:50.400464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.336 [2024-07-22 18:09:50.400473] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.336 [2024-07-22 18:09:50.400645] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.336 [2024-07-22 18:09:50.400781] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.336 [2024-07-22 18:09:50.400789] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.336 [2024-07-22 18:09:50.400796] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.336 [2024-07-22 18:09:50.402873] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.336 [2024-07-22 18:09:50.412070] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.336 [2024-07-22 18:09:50.412626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.336 [2024-07-22 18:09:50.412955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.336 [2024-07-22 18:09:50.412967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.336 [2024-07-22 18:09:50.412976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.336 [2024-07-22 18:09:50.413130] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.336 [2024-07-22 18:09:50.413265] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.336 [2024-07-22 18:09:50.413273] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.336 [2024-07-22 18:09:50.413281] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.336 [2024-07-22 18:09:50.415395] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.336 [2024-07-22 18:09:50.424435] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.336 [2024-07-22 18:09:50.424847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.336 [2024-07-22 18:09:50.425144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.336 [2024-07-22 18:09:50.425157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.336 [2024-07-22 18:09:50.425166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.337 [2024-07-22 18:09:50.425302] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.337 [2024-07-22 18:09:50.425430] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.337 [2024-07-22 18:09:50.425444] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.337 [2024-07-22 18:09:50.425451] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.337 [2024-07-22 18:09:50.427523] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.337 [2024-07-22 18:09:50.436695] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.337 [2024-07-22 18:09:50.437217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.337 [2024-07-22 18:09:50.437560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.337 [2024-07-22 18:09:50.437574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.337 [2024-07-22 18:09:50.437584] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.337 [2024-07-22 18:09:50.437737] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.337 [2024-07-22 18:09:50.437890] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.337 [2024-07-22 18:09:50.437897] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.337 [2024-07-22 18:09:50.437904] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.337 [2024-07-22 18:09:50.439874] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.337 [2024-07-22 18:09:50.449130] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.337 [2024-07-22 18:09:50.449685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.337 [2024-07-22 18:09:50.450004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.337 [2024-07-22 18:09:50.450016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.337 [2024-07-22 18:09:50.450025] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.337 [2024-07-22 18:09:50.450194] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.337 [2024-07-22 18:09:50.450372] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.337 [2024-07-22 18:09:50.450381] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.337 [2024-07-22 18:09:50.450388] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.337 [2024-07-22 18:09:50.452471] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.337 [2024-07-22 18:09:50.461469] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.337 [2024-07-22 18:09:50.462007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.337 [2024-07-22 18:09:50.462380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.337 [2024-07-22 18:09:50.462393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.337 [2024-07-22 18:09:50.462402] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.337 [2024-07-22 18:09:50.462587] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.337 [2024-07-22 18:09:50.462722] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.337 [2024-07-22 18:09:50.462730] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.337 [2024-07-22 18:09:50.462741] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.337 [2024-07-22 18:09:50.464631] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.337 [2024-07-22 18:09:50.473888] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.337 [2024-07-22 18:09:50.474404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.337 [2024-07-22 18:09:50.474739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.337 [2024-07-22 18:09:50.474752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.337 [2024-07-22 18:09:50.474761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.337 [2024-07-22 18:09:50.474894] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.337 [2024-07-22 18:09:50.474996] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.337 [2024-07-22 18:09:50.475003] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.337 [2024-07-22 18:09:50.475010] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.337 [2024-07-22 18:09:50.477137] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.337 [2024-07-22 18:09:50.486005] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.337 [2024-07-22 18:09:50.486471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.337 [2024-07-22 18:09:50.486806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.337 [2024-07-22 18:09:50.486817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.337 [2024-07-22 18:09:50.486826] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.337 [2024-07-22 18:09:50.486960] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.337 [2024-07-22 18:09:50.487078] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.337 [2024-07-22 18:09:50.487085] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.337 [2024-07-22 18:09:50.487092] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.337 [2024-07-22 18:09:50.489086] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.337 [2024-07-22 18:09:50.498281] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.337 [2024-07-22 18:09:50.498876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.337 [2024-07-22 18:09:50.499104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.337 [2024-07-22 18:09:50.499116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.337 [2024-07-22 18:09:50.499125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.337 [2024-07-22 18:09:50.499241] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.337 [2024-07-22 18:09:50.499417] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.337 [2024-07-22 18:09:50.499425] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.337 [2024-07-22 18:09:50.499432] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.337 [2024-07-22 18:09:50.501472] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.337 [2024-07-22 18:09:50.510628] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.337 [2024-07-22 18:09:50.511063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.337 [2024-07-22 18:09:50.511358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.337 [2024-07-22 18:09:50.511368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.337 [2024-07-22 18:09:50.511375] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.337 [2024-07-22 18:09:50.511558] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.337 [2024-07-22 18:09:50.511691] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.337 [2024-07-22 18:09:50.511698] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.337 [2024-07-22 18:09:50.511705] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.337 [2024-07-22 18:09:50.513500] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.337 [2024-07-22 18:09:50.522873] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.337 [2024-07-22 18:09:50.523379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.337 [2024-07-22 18:09:50.523656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.337 [2024-07-22 18:09:50.523668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.337 [2024-07-22 18:09:50.523678] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.337 [2024-07-22 18:09:50.523795] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.337 [2024-07-22 18:09:50.523929] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.337 [2024-07-22 18:09:50.523937] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.337 [2024-07-22 18:09:50.523944] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.337 [2024-07-22 18:09:50.526077] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.337 [2024-07-22 18:09:50.535131] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.337 [2024-07-22 18:09:50.535576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.337 [2024-07-22 18:09:50.535883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.337 [2024-07-22 18:09:50.535894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.337 [2024-07-22 18:09:50.535903] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.337 [2024-07-22 18:09:50.536106] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.337 [2024-07-22 18:09:50.536224] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.337 [2024-07-22 18:09:50.536232] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.338 [2024-07-22 18:09:50.536238] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.338 [2024-07-22 18:09:50.538178] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.338 [2024-07-22 18:09:50.547673] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.338 [2024-07-22 18:09:50.548200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.338 [2024-07-22 18:09:50.548438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.338 [2024-07-22 18:09:50.548451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.338 [2024-07-22 18:09:50.548460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.338 [2024-07-22 18:09:50.548626] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.338 [2024-07-22 18:09:50.548711] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.338 [2024-07-22 18:09:50.548719] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.338 [2024-07-22 18:09:50.548726] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.338 [2024-07-22 18:09:50.550693] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.338 [2024-07-22 18:09:50.560120] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.338 [2024-07-22 18:09:50.560703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.338 [2024-07-22 18:09:50.561021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.338 [2024-07-22 18:09:50.561033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.338 [2024-07-22 18:09:50.561041] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.338 [2024-07-22 18:09:50.561226] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.338 [2024-07-22 18:09:50.561328] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.338 [2024-07-22 18:09:50.561336] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.338 [2024-07-22 18:09:50.561343] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.338 [2024-07-22 18:09:50.563568] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.338 [2024-07-22 18:09:50.572493] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.338 [2024-07-22 18:09:50.573012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.338 [2024-07-22 18:09:50.573389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.338 [2024-07-22 18:09:50.573402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.338 [2024-07-22 18:09:50.573411] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.338 [2024-07-22 18:09:50.573529] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.338 [2024-07-22 18:09:50.573664] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.338 [2024-07-22 18:09:50.573671] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.338 [2024-07-22 18:09:50.573678] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.338 [2024-07-22 18:09:50.575682] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.338 [2024-07-22 18:09:50.584859] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.338 [2024-07-22 18:09:50.585361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.338 [2024-07-22 18:09:50.585593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.338 [2024-07-22 18:09:50.585603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.338 [2024-07-22 18:09:50.585610] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.338 [2024-07-22 18:09:50.585742] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.338 [2024-07-22 18:09:50.585857] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.338 [2024-07-22 18:09:50.585864] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.338 [2024-07-22 18:09:50.585871] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.338 [2024-07-22 18:09:50.587782] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.338 [2024-07-22 18:09:50.597096] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.338 [2024-07-22 18:09:50.597617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.338 [2024-07-22 18:09:50.597891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.338 [2024-07-22 18:09:50.597903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.338 [2024-07-22 18:09:50.597913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.338 [2024-07-22 18:09:50.598047] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.338 [2024-07-22 18:09:50.598165] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.338 [2024-07-22 18:09:50.598173] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.338 [2024-07-22 18:09:50.598180] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.338 [2024-07-22 18:09:50.600444] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.338 [2024-07-22 18:09:50.609175] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.338 [2024-07-22 18:09:50.609764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.338 [2024-07-22 18:09:50.610068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.338 [2024-07-22 18:09:50.610080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.338 [2024-07-22 18:09:50.610089] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.338 [2024-07-22 18:09:50.610240] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.338 [2024-07-22 18:09:50.610341] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.338 [2024-07-22 18:09:50.610357] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.338 [2024-07-22 18:09:50.610365] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.601 [2024-07-22 18:09:50.612433] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.601 [2024-07-22 18:09:50.621420] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.601 [2024-07-22 18:09:50.622039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.601 [2024-07-22 18:09:50.622362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.601 [2024-07-22 18:09:50.622379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.601 [2024-07-22 18:09:50.622388] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.601 [2024-07-22 18:09:50.622556] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.601 [2024-07-22 18:09:50.622709] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.601 [2024-07-22 18:09:50.622716] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.601 [2024-07-22 18:09:50.622723] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.601 [2024-07-22 18:09:50.624775] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.601 [2024-07-22 18:09:50.633779] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.601 [2024-07-22 18:09:50.634367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.601 [2024-07-22 18:09:50.634695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.601 [2024-07-22 18:09:50.634707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.601 [2024-07-22 18:09:50.634716] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.601 [2024-07-22 18:09:50.634832] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.601 [2024-07-22 18:09:50.634984] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.601 [2024-07-22 18:09:50.634991] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.601 [2024-07-22 18:09:50.634998] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.601 [2024-07-22 18:09:50.636886] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.601 [2024-07-22 18:09:50.646158] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.601 [2024-07-22 18:09:50.646662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.601 [2024-07-22 18:09:50.646984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.601 [2024-07-22 18:09:50.646996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.601 [2024-07-22 18:09:50.647005] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.601 [2024-07-22 18:09:50.647173] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.601 [2024-07-22 18:09:50.647257] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.601 [2024-07-22 18:09:50.647265] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.601 [2024-07-22 18:09:50.647272] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.601 [2024-07-22 18:09:50.649298] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.601 [2024-07-22 18:09:50.658464] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.601 [2024-07-22 18:09:50.659001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.601 [2024-07-22 18:09:50.659373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.601 [2024-07-22 18:09:50.659386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.601 [2024-07-22 18:09:50.659399] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.601 [2024-07-22 18:09:50.659567] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.601 [2024-07-22 18:09:50.659686] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.601 [2024-07-22 18:09:50.659694] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.601 [2024-07-22 18:09:50.659701] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.601 [2024-07-22 18:09:50.661794] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.601 [2024-07-22 18:09:50.670696] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.601 [2024-07-22 18:09:50.671164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.601 [2024-07-22 18:09:50.671589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.601 [2024-07-22 18:09:50.671624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.601 [2024-07-22 18:09:50.671634] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.601 [2024-07-22 18:09:50.671786] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.601 [2024-07-22 18:09:50.671904] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.601 [2024-07-22 18:09:50.671912] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.601 [2024-07-22 18:09:50.671919] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.601 [2024-07-22 18:09:50.673972] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.601 [2024-07-22 18:09:50.683107] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.601 [2024-07-22 18:09:50.683424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.601 [2024-07-22 18:09:50.683757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.601 [2024-07-22 18:09:50.683766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.601 [2024-07-22 18:09:50.683773] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.601 [2024-07-22 18:09:50.683889] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.601 [2024-07-22 18:09:50.684021] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.601 [2024-07-22 18:09:50.684028] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.601 [2024-07-22 18:09:50.684034] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.601 [2024-07-22 18:09:50.686150] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.601 [2024-07-22 18:09:50.695445] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.601 [2024-07-22 18:09:50.696014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.602 [2024-07-22 18:09:50.696320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.602 [2024-07-22 18:09:50.696332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.602 [2024-07-22 18:09:50.696341] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.602 [2024-07-22 18:09:50.696470] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.602 [2024-07-22 18:09:50.696622] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.602 [2024-07-22 18:09:50.696630] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.602 [2024-07-22 18:09:50.696637] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.602 [2024-07-22 18:09:50.698656] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.602 [2024-07-22 18:09:50.707623] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.602 [2024-07-22 18:09:50.708132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.602 [2024-07-22 18:09:50.708460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.602 [2024-07-22 18:09:50.708474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.602 [2024-07-22 18:09:50.708483] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.602 [2024-07-22 18:09:50.708633] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.602 [2024-07-22 18:09:50.708752] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.602 [2024-07-22 18:09:50.708760] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.602 [2024-07-22 18:09:50.708767] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.602 [2024-07-22 18:09:50.710685] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.602 [2024-07-22 18:09:50.720055] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.602 [2024-07-22 18:09:50.720534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.602 [2024-07-22 18:09:50.720929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.602 [2024-07-22 18:09:50.720941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.602 [2024-07-22 18:09:50.720950] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.602 [2024-07-22 18:09:50.721083] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.602 [2024-07-22 18:09:50.721219] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.602 [2024-07-22 18:09:50.721227] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.602 [2024-07-22 18:09:50.721234] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.602 [2024-07-22 18:09:50.723081] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.602 [2024-07-22 18:09:50.732289] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.602 [2024-07-22 18:09:50.732747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.602 [2024-07-22 18:09:50.733121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.602 [2024-07-22 18:09:50.733133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.602 [2024-07-22 18:09:50.733141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.602 [2024-07-22 18:09:50.733240] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.602 [2024-07-22 18:09:50.733421] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.602 [2024-07-22 18:09:50.733430] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.602 [2024-07-22 18:09:50.733438] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.602 [2024-07-22 18:09:50.735456] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.602 [2024-07-22 18:09:50.744568] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.602 [2024-07-22 18:09:50.745119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.602 [2024-07-22 18:09:50.745434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.602 [2024-07-22 18:09:50.745447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.602 [2024-07-22 18:09:50.745456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.602 [2024-07-22 18:09:50.745607] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.602 [2024-07-22 18:09:50.745742] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.602 [2024-07-22 18:09:50.745750] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.602 [2024-07-22 18:09:50.745756] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.602 [2024-07-22 18:09:50.747811] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.602 [2024-07-22 18:09:50.756886] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.602 [2024-07-22 18:09:50.757452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.602 [2024-07-22 18:09:50.757663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.602 [2024-07-22 18:09:50.757675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.602 [2024-07-22 18:09:50.757684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.602 [2024-07-22 18:09:50.757782] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.602 [2024-07-22 18:09:50.757918] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.602 [2024-07-22 18:09:50.757925] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.602 [2024-07-22 18:09:50.757932] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.602 [2024-07-22 18:09:50.760093] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.602 [2024-07-22 18:09:50.769286] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.602 [2024-07-22 18:09:50.769780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.602 [2024-07-22 18:09:50.770075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.602 [2024-07-22 18:09:50.770088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.602 [2024-07-22 18:09:50.770097] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.602 [2024-07-22 18:09:50.770264] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.602 [2024-07-22 18:09:50.770408] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.602 [2024-07-22 18:09:50.770421] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.602 [2024-07-22 18:09:50.770428] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.602 [2024-07-22 18:09:50.772447] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.602 [2024-07-22 18:09:50.781599] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.602 [2024-07-22 18:09:50.782029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.602 [2024-07-22 18:09:50.782326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.602 [2024-07-22 18:09:50.782338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.602 [2024-07-22 18:09:50.782347] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.602 [2024-07-22 18:09:50.782507] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.602 [2024-07-22 18:09:50.782643] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.602 [2024-07-22 18:09:50.782651] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.602 [2024-07-22 18:09:50.782658] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.602 [2024-07-22 18:09:50.784526] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.602 [2024-07-22 18:09:50.793841] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.602 [2024-07-22 18:09:50.794365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.602 [2024-07-22 18:09:50.794690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.602 [2024-07-22 18:09:50.794702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.602 [2024-07-22 18:09:50.794711] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.602 [2024-07-22 18:09:50.794862] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.602 [2024-07-22 18:09:50.795015] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.602 [2024-07-22 18:09:50.795023] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.602 [2024-07-22 18:09:50.795030] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.602 [2024-07-22 18:09:50.797153] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.602 [2024-07-22 18:09:50.806156] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.602 [2024-07-22 18:09:50.806657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.602 [2024-07-22 18:09:50.806973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.602 [2024-07-22 18:09:50.806985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.603 [2024-07-22 18:09:50.806993] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.603 [2024-07-22 18:09:50.807127] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.603 [2024-07-22 18:09:50.807279] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.603 [2024-07-22 18:09:50.807287] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.603 [2024-07-22 18:09:50.807298] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.603 [2024-07-22 18:09:50.809612] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.603 [2024-07-22 18:09:50.818518] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.603 [2024-07-22 18:09:50.819106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.603 [2024-07-22 18:09:50.819408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.603 [2024-07-22 18:09:50.819422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.603 [2024-07-22 18:09:50.819431] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.603 [2024-07-22 18:09:50.819615] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.603 [2024-07-22 18:09:50.819768] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.603 [2024-07-22 18:09:50.819776] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.603 [2024-07-22 18:09:50.819783] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.603 [2024-07-22 18:09:50.821731] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.603 [2024-07-22 18:09:50.830606] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.603 [2024-07-22 18:09:50.831150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.603 [2024-07-22 18:09:50.831467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.603 [2024-07-22 18:09:50.831481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.603 [2024-07-22 18:09:50.831490] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.603 [2024-07-22 18:09:50.831658] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.603 [2024-07-22 18:09:50.831759] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.603 [2024-07-22 18:09:50.831766] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.603 [2024-07-22 18:09:50.831773] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.603 [2024-07-22 18:09:50.833946] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.603 [2024-07-22 18:09:50.842947] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.603 [2024-07-22 18:09:50.843380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.603 [2024-07-22 18:09:50.843691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.603 [2024-07-22 18:09:50.843700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.603 [2024-07-22 18:09:50.843707] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.603 [2024-07-22 18:09:50.843806] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.603 [2024-07-22 18:09:50.843938] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.603 [2024-07-22 18:09:50.843945] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.603 [2024-07-22 18:09:50.843951] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.603 [2024-07-22 18:09:50.846041] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.603 [2024-07-22 18:09:50.855355] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.603 [2024-07-22 18:09:50.855885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.603 [2024-07-22 18:09:50.856200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.603 [2024-07-22 18:09:50.856212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.603 [2024-07-22 18:09:50.856221] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.603 [2024-07-22 18:09:50.856338] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.603 [2024-07-22 18:09:50.856481] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.603 [2024-07-22 18:09:50.856489] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.603 [2024-07-22 18:09:50.856496] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.603 [2024-07-22 18:09:50.858681] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.603 [2024-07-22 18:09:50.867615] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.603 [2024-07-22 18:09:50.868047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.603 [2024-07-22 18:09:50.868337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.603 [2024-07-22 18:09:50.868346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.603 [2024-07-22 18:09:50.868361] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.603 [2024-07-22 18:09:50.868511] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.603 [2024-07-22 18:09:50.868627] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.603 [2024-07-22 18:09:50.868635] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.603 [2024-07-22 18:09:50.868641] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.603 [2024-07-22 18:09:50.870756] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.865 [2024-07-22 18:09:50.880017] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.865 [2024-07-22 18:09:50.880546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.865 [2024-07-22 18:09:50.880858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.865 [2024-07-22 18:09:50.880870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.865 [2024-07-22 18:09:50.880879] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.865 [2024-07-22 18:09:50.881029] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.865 [2024-07-22 18:09:50.881164] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.865 [2024-07-22 18:09:50.881171] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.865 [2024-07-22 18:09:50.881178] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.865 [2024-07-22 18:09:50.883181] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.865 [2024-07-22 18:09:50.892379] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.865 [2024-07-22 18:09:50.892974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.865 [2024-07-22 18:09:50.893220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.865 [2024-07-22 18:09:50.893232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.865 [2024-07-22 18:09:50.893241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.865 [2024-07-22 18:09:50.893418] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.865 [2024-07-22 18:09:50.893555] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.865 [2024-07-22 18:09:50.893562] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.865 [2024-07-22 18:09:50.893569] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.866 [2024-07-22 18:09:50.895550] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.866 [2024-07-22 18:09:50.904651] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.866 [2024-07-22 18:09:50.905001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.866 [2024-07-22 18:09:50.905304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.866 [2024-07-22 18:09:50.905313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.866 [2024-07-22 18:09:50.905320] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.866 [2024-07-22 18:09:50.905441] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.866 [2024-07-22 18:09:50.905573] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.866 [2024-07-22 18:09:50.905581] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.866 [2024-07-22 18:09:50.905587] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.866 [2024-07-22 18:09:50.907601] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.866 [2024-07-22 18:09:50.916833] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.866 [2024-07-22 18:09:50.917306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.866 [2024-07-22 18:09:50.917637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.866 [2024-07-22 18:09:50.917647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.866 [2024-07-22 18:09:50.917654] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.866 [2024-07-22 18:09:50.917769] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.866 [2024-07-22 18:09:50.917884] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.866 [2024-07-22 18:09:50.917891] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.866 [2024-07-22 18:09:50.917898] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.866 [2024-07-22 18:09:50.919877] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.866 [2024-07-22 18:09:50.929008] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.866 [2024-07-22 18:09:50.929545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.866 [2024-07-22 18:09:50.929859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.866 [2024-07-22 18:09:50.929871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.866 [2024-07-22 18:09:50.929880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.866 [2024-07-22 18:09:50.929997] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.866 [2024-07-22 18:09:50.930097] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.866 [2024-07-22 18:09:50.930105] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.866 [2024-07-22 18:09:50.930112] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.866 [2024-07-22 18:09:50.932407] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.866 [2024-07-22 18:09:50.941288] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.866 [2024-07-22 18:09:50.941726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.866 [2024-07-22 18:09:50.942056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.866 [2024-07-22 18:09:50.942065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.866 [2024-07-22 18:09:50.942072] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.866 [2024-07-22 18:09:50.942221] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.866 [2024-07-22 18:09:50.942376] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.866 [2024-07-22 18:09:50.942384] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.866 [2024-07-22 18:09:50.942391] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.866 [2024-07-22 18:09:50.944622] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.866 [2024-07-22 18:09:50.953754] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.866 [2024-07-22 18:09:50.954228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.866 [2024-07-22 18:09:50.954567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.866 [2024-07-22 18:09:50.954577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.866 [2024-07-22 18:09:50.954584] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.866 [2024-07-22 18:09:50.954699] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.866 [2024-07-22 18:09:50.954831] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.866 [2024-07-22 18:09:50.954838] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.866 [2024-07-22 18:09:50.954845] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.866 [2024-07-22 18:09:50.956960] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.866 [2024-07-22 18:09:50.965814] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.866 [2024-07-22 18:09:50.966251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.866 [2024-07-22 18:09:50.966574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.866 [2024-07-22 18:09:50.966588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.866 [2024-07-22 18:09:50.966595] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.866 [2024-07-22 18:09:50.966711] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.866 [2024-07-22 18:09:50.966843] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.866 [2024-07-22 18:09:50.966850] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.866 [2024-07-22 18:09:50.966856] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.866 [2024-07-22 18:09:50.969007] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.866 [2024-07-22 18:09:50.978075] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.866 [2024-07-22 18:09:50.978655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.866 [2024-07-22 18:09:50.978955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.866 [2024-07-22 18:09:50.978969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.866 [2024-07-22 18:09:50.978979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.866 [2024-07-22 18:09:50.979098] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.866 [2024-07-22 18:09:50.979216] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.866 [2024-07-22 18:09:50.979224] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.866 [2024-07-22 18:09:50.979231] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.866 [2024-07-22 18:09:50.981355] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.866 [2024-07-22 18:09:50.990317] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.866 [2024-07-22 18:09:50.990755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.866 [2024-07-22 18:09:50.991061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.866 [2024-07-22 18:09:50.991069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.866 [2024-07-22 18:09:50.991076] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.866 [2024-07-22 18:09:50.991225] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.866 [2024-07-22 18:09:50.991363] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.866 [2024-07-22 18:09:50.991372] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.866 [2024-07-22 18:09:50.991378] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.866 [2024-07-22 18:09:50.993476] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.866 [2024-07-22 18:09:51.002464] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.866 [2024-07-22 18:09:51.002895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.866 [2024-07-22 18:09:51.003197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.866 [2024-07-22 18:09:51.003205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.866 [2024-07-22 18:09:51.003216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.866 [2024-07-22 18:09:51.003332] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.866 [2024-07-22 18:09:51.003436] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.866 [2024-07-22 18:09:51.003444] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.866 [2024-07-22 18:09:51.003450] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.866 [2024-07-22 18:09:51.005468] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.866 [2024-07-22 18:09:51.014815] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.867 [2024-07-22 18:09:51.015294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.867 [2024-07-22 18:09:51.015600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.867 [2024-07-22 18:09:51.015610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.867 [2024-07-22 18:09:51.015617] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.867 [2024-07-22 18:09:51.015749] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.867 [2024-07-22 18:09:51.015914] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.867 [2024-07-22 18:09:51.015922] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.867 [2024-07-22 18:09:51.015928] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.867 [2024-07-22 18:09:51.017754] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.867 [2024-07-22 18:09:51.027324] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.867 [2024-07-22 18:09:51.027793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.867 [2024-07-22 18:09:51.028095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.867 [2024-07-22 18:09:51.028104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.867 [2024-07-22 18:09:51.028111] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.867 [2024-07-22 18:09:51.028243] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.867 [2024-07-22 18:09:51.028365] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.867 [2024-07-22 18:09:51.028373] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.867 [2024-07-22 18:09:51.028379] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.867 [2024-07-22 18:09:51.030396] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.867 [2024-07-22 18:09:51.039699] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.867 [2024-07-22 18:09:51.040163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.867 [2024-07-22 18:09:51.040363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.867 [2024-07-22 18:09:51.040374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.867 [2024-07-22 18:09:51.040380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.867 [2024-07-22 18:09:51.040517] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.867 [2024-07-22 18:09:51.040667] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.867 [2024-07-22 18:09:51.040674] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.867 [2024-07-22 18:09:51.040680] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.867 [2024-07-22 18:09:51.042746] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.867 [2024-07-22 18:09:51.052199] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.867 [2024-07-22 18:09:51.052753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.867 [2024-07-22 18:09:51.052979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.867 [2024-07-22 18:09:51.052991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.867 [2024-07-22 18:09:51.053000] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.867 [2024-07-22 18:09:51.053150] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.867 [2024-07-22 18:09:51.053285] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.867 [2024-07-22 18:09:51.053293] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.867 [2024-07-22 18:09:51.053299] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.867 [2024-07-22 18:09:51.055394] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.867 [2024-07-22 18:09:51.064638] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.867 [2024-07-22 18:09:51.065190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.867 [2024-07-22 18:09:51.065579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.867 [2024-07-22 18:09:51.065592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.867 [2024-07-22 18:09:51.065601] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.867 [2024-07-22 18:09:51.065701] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.867 [2024-07-22 18:09:51.065871] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.867 [2024-07-22 18:09:51.065879] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.867 [2024-07-22 18:09:51.065885] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.867 [2024-07-22 18:09:51.067957] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.867 [2024-07-22 18:09:51.076905] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.867 [2024-07-22 18:09:51.077443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.867 [2024-07-22 18:09:51.077779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.867 [2024-07-22 18:09:51.077791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.867 [2024-07-22 18:09:51.077800] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.867 [2024-07-22 18:09:51.077985] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.867 [2024-07-22 18:09:51.078124] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.867 [2024-07-22 18:09:51.078132] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.867 [2024-07-22 18:09:51.078139] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.867 [2024-07-22 18:09:51.080130] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.867 [2024-07-22 18:09:51.089090] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.867 [2024-07-22 18:09:51.089668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.867 [2024-07-22 18:09:51.090041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.867 [2024-07-22 18:09:51.090053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.867 [2024-07-22 18:09:51.090062] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.867 [2024-07-22 18:09:51.090229] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.867 [2024-07-22 18:09:51.090357] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.867 [2024-07-22 18:09:51.090366] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.867 [2024-07-22 18:09:51.090373] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.867 [2024-07-22 18:09:51.092424] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.867 [2024-07-22 18:09:51.101570] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.867 [2024-07-22 18:09:51.102148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.867 [2024-07-22 18:09:51.102361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.867 [2024-07-22 18:09:51.102374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.867 [2024-07-22 18:09:51.102383] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.867 [2024-07-22 18:09:51.102516] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.867 [2024-07-22 18:09:51.102635] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.867 [2024-07-22 18:09:51.102643] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.867 [2024-07-22 18:09:51.102650] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.867 [2024-07-22 18:09:51.104752] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.867 [2024-07-22 18:09:51.113975] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.867 [2024-07-22 18:09:51.114504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.867 [2024-07-22 18:09:51.114857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.867 [2024-07-22 18:09:51.114869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.867 [2024-07-22 18:09:51.114878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.867 [2024-07-22 18:09:51.115028] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.867 [2024-07-22 18:09:51.115181] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.867 [2024-07-22 18:09:51.115193] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.867 [2024-07-22 18:09:51.115200] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.867 [2024-07-22 18:09:51.117376] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:46.867 [2024-07-22 18:09:51.126250] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.867 [2024-07-22 18:09:51.126823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.867 [2024-07-22 18:09:51.127135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.868 [2024-07-22 18:09:51.127147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.868 [2024-07-22 18:09:51.127156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.868 [2024-07-22 18:09:51.127289] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.868 [2024-07-22 18:09:51.127398] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.868 [2024-07-22 18:09:51.127407] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.868 [2024-07-22 18:09:51.127413] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:46.868 [2024-07-22 18:09:51.129585] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:46.868 [2024-07-22 18:09:51.138536] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:46.868 [2024-07-22 18:09:51.139031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.868 [2024-07-22 18:09:51.139297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:46.868 [2024-07-22 18:09:51.139307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:46.868 [2024-07-22 18:09:51.139314] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:46.868 [2024-07-22 18:09:51.139419] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:46.868 [2024-07-22 18:09:51.139570] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:46.868 [2024-07-22 18:09:51.139578] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:46.868 [2024-07-22 18:09:51.139585] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.130 [2024-07-22 18:09:51.141815] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.130 [2024-07-22 18:09:51.150788] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.130 [2024-07-22 18:09:51.151177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.130 [2024-07-22 18:09:51.151516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.130 [2024-07-22 18:09:51.151526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.130 [2024-07-22 18:09:51.151532] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.130 [2024-07-22 18:09:51.151665] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.130 [2024-07-22 18:09:51.151797] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.130 [2024-07-22 18:09:51.151804] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.130 [2024-07-22 18:09:51.151815] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.130 [2024-07-22 18:09:51.153946] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.130 [2024-07-22 18:09:51.163056] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.130 [2024-07-22 18:09:51.163645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.130 [2024-07-22 18:09:51.163964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.130 [2024-07-22 18:09:51.163976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.130 [2024-07-22 18:09:51.163985] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.130 [2024-07-22 18:09:51.164119] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.130 [2024-07-22 18:09:51.164238] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.130 [2024-07-22 18:09:51.164246] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.130 [2024-07-22 18:09:51.164252] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.130 [2024-07-22 18:09:51.166206] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.130 [2024-07-22 18:09:51.175429] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.130 [2024-07-22 18:09:51.175983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.130 [2024-07-22 18:09:51.176262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.130 [2024-07-22 18:09:51.176274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.130 [2024-07-22 18:09:51.176283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.130 [2024-07-22 18:09:51.176443] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.130 [2024-07-22 18:09:51.176595] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.130 [2024-07-22 18:09:51.176603] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.130 [2024-07-22 18:09:51.176610] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.130 [2024-07-22 18:09:51.178511] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.130 [2024-07-22 18:09:51.187669] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.130 [2024-07-22 18:09:51.188199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.130 [2024-07-22 18:09:51.188536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.130 [2024-07-22 18:09:51.188549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.130 [2024-07-22 18:09:51.188558] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.130 [2024-07-22 18:09:51.188743] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.130 [2024-07-22 18:09:51.188879] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.130 [2024-07-22 18:09:51.188886] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.130 [2024-07-22 18:09:51.188893] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.130 [2024-07-22 18:09:51.190930] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.130 [2024-07-22 18:09:51.200113] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.130 [2024-07-22 18:09:51.200663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.130 [2024-07-22 18:09:51.200975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.130 [2024-07-22 18:09:51.200987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.130 [2024-07-22 18:09:51.200996] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.130 [2024-07-22 18:09:51.201181] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.130 [2024-07-22 18:09:51.201315] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.130 [2024-07-22 18:09:51.201323] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.130 [2024-07-22 18:09:51.201330] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.130 [2024-07-22 18:09:51.203457] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.130 [2024-07-22 18:09:51.212410] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.130 [2024-07-22 18:09:51.212987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.130 [2024-07-22 18:09:51.213301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.130 [2024-07-22 18:09:51.213313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.130 [2024-07-22 18:09:51.213322] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.130 [2024-07-22 18:09:51.213463] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.131 [2024-07-22 18:09:51.213582] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.131 [2024-07-22 18:09:51.213590] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.131 [2024-07-22 18:09:51.213597] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.131 [2024-07-22 18:09:51.215603] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.131 [2024-07-22 18:09:51.224734] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.131 [2024-07-22 18:09:51.225222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-07-22 18:09:51.225554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-07-22 18:09:51.225568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.131 [2024-07-22 18:09:51.225577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.131 [2024-07-22 18:09:51.225694] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.131 [2024-07-22 18:09:51.225829] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.131 [2024-07-22 18:09:51.225837] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.131 [2024-07-22 18:09:51.225844] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.131 [2024-07-22 18:09:51.228115] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
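The follow-on "Failed to flush tqpair=... (9): Bad file descriptor" entries are errno 9 (EBADF), which is consistent with the flush being issued against a socket descriptor that is no longer valid after the failed connect. A minimal sketch (plain POSIX, not SPDK) that produces the same errno by writing to a descriptor that has already been closed:

```c
/*
 * Illustrative sketch only: errno 9 is EBADF. The snippet produces the
 * same errno as the "Failed to flush tqpair=... (9): Bad file descriptor"
 * entries above by issuing a write() on a descriptor that has already
 * been closed.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) < 0) {
        perror("pipe");
        return 1;
    }

    close(fds[1]);                  /* invalidate the write end */

    if (write(fds[1], "x", 1) < 0) {
        /* Prints errno 9 (EBADF), matching the flush error in the log. */
        printf("write() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fds[0]);
    return 0;
}
```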
00:32:47.131 [2024-07-22 18:09:51.237059] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.131 [2024-07-22 18:09:51.237376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-07-22 18:09:51.237721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-07-22 18:09:51.237730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.131 [2024-07-22 18:09:51.237737] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.131 [2024-07-22 18:09:51.237869] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.131 [2024-07-22 18:09:51.237984] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.131 [2024-07-22 18:09:51.237992] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.131 [2024-07-22 18:09:51.237998] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.131 [2024-07-22 18:09:51.240330] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.131 [2024-07-22 18:09:51.249542] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.131 [2024-07-22 18:09:51.250124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-07-22 18:09:51.250506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-07-22 18:09:51.250520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.131 [2024-07-22 18:09:51.250529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.131 [2024-07-22 18:09:51.250663] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.131 [2024-07-22 18:09:51.250816] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.131 [2024-07-22 18:09:51.250823] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.131 [2024-07-22 18:09:51.250830] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.131 [2024-07-22 18:09:51.252967] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.131 [2024-07-22 18:09:51.261871] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.131 [2024-07-22 18:09:51.262301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-07-22 18:09:51.262667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-07-22 18:09:51.262677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.131 [2024-07-22 18:09:51.262685] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.131 [2024-07-22 18:09:51.262819] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.131 [2024-07-22 18:09:51.262969] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.131 [2024-07-22 18:09:51.262977] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.131 [2024-07-22 18:09:51.262984] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.131 [2024-07-22 18:09:51.265149] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.131 [2024-07-22 18:09:51.274163] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.131 [2024-07-22 18:09:51.274659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-07-22 18:09:51.274952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-07-22 18:09:51.274961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.131 [2024-07-22 18:09:51.274968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.131 [2024-07-22 18:09:51.275117] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.131 [2024-07-22 18:09:51.275282] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.131 [2024-07-22 18:09:51.275289] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.131 [2024-07-22 18:09:51.275295] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.131 [2024-07-22 18:09:51.277238] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.131 [2024-07-22 18:09:51.286497] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.131 [2024-07-22 18:09:51.286872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-07-22 18:09:51.287173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-07-22 18:09:51.287182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.131 [2024-07-22 18:09:51.287188] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.131 [2024-07-22 18:09:51.287303] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.131 [2024-07-22 18:09:51.287406] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.131 [2024-07-22 18:09:51.287414] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.131 [2024-07-22 18:09:51.287421] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.131 [2024-07-22 18:09:51.289555] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.131 [2024-07-22 18:09:51.298767] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.131 [2024-07-22 18:09:51.299319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-07-22 18:09:51.299723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-07-22 18:09:51.299736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.131 [2024-07-22 18:09:51.299745] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.131 [2024-07-22 18:09:51.299913] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.131 [2024-07-22 18:09:51.300032] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.131 [2024-07-22 18:09:51.300039] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.131 [2024-07-22 18:09:51.300046] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.131 [2024-07-22 18:09:51.301984] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.131 [2024-07-22 18:09:51.310902] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.131 [2024-07-22 18:09:51.311321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-07-22 18:09:51.311642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-07-22 18:09:51.311652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.131 [2024-07-22 18:09:51.311659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.131 [2024-07-22 18:09:51.311792] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.131 [2024-07-22 18:09:51.311941] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.131 [2024-07-22 18:09:51.311948] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.131 [2024-07-22 18:09:51.311955] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.131 [2024-07-22 18:09:51.313987] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.131 [2024-07-22 18:09:51.323311] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.131 [2024-07-22 18:09:51.323775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-07-22 18:09:51.324065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.131 [2024-07-22 18:09:51.324074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.131 [2024-07-22 18:09:51.324080] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.131 [2024-07-22 18:09:51.324229] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.132 [2024-07-22 18:09:51.324366] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.132 [2024-07-22 18:09:51.324374] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.132 [2024-07-22 18:09:51.324380] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.132 [2024-07-22 18:09:51.326613] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.132 [2024-07-22 18:09:51.335571] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.132 [2024-07-22 18:09:51.335996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-07-22 18:09:51.336307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-07-22 18:09:51.336315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.132 [2024-07-22 18:09:51.336322] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.132 [2024-07-22 18:09:51.336477] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.132 [2024-07-22 18:09:51.336644] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.132 [2024-07-22 18:09:51.336651] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.132 [2024-07-22 18:09:51.336658] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.132 [2024-07-22 18:09:51.338753] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.132 [2024-07-22 18:09:51.347969] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.132 [2024-07-22 18:09:51.348411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-07-22 18:09:51.348750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-07-22 18:09:51.348759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.132 [2024-07-22 18:09:51.348769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.132 [2024-07-22 18:09:51.348884] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.132 [2024-07-22 18:09:51.349033] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.132 [2024-07-22 18:09:51.349041] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.132 [2024-07-22 18:09:51.349047] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.132 [2024-07-22 18:09:51.351294] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.132 [2024-07-22 18:09:51.360281] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.132 [2024-07-22 18:09:51.360764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-07-22 18:09:51.360988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-07-22 18:09:51.360997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.132 [2024-07-22 18:09:51.361004] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.132 [2024-07-22 18:09:51.361152] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.132 [2024-07-22 18:09:51.361300] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.132 [2024-07-22 18:09:51.361307] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.132 [2024-07-22 18:09:51.361314] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.132 [2024-07-22 18:09:51.363366] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.132 [2024-07-22 18:09:51.372556] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.132 [2024-07-22 18:09:51.373012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-07-22 18:09:51.373321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-07-22 18:09:51.373330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.132 [2024-07-22 18:09:51.373336] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.132 [2024-07-22 18:09:51.373456] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.132 [2024-07-22 18:09:51.373537] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.132 [2024-07-22 18:09:51.373544] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.132 [2024-07-22 18:09:51.373550] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.132 [2024-07-22 18:09:51.375699] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.132 [2024-07-22 18:09:51.384796] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.132 [2024-07-22 18:09:51.385231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-07-22 18:09:51.385650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-07-22 18:09:51.385660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.132 [2024-07-22 18:09:51.385667] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.132 [2024-07-22 18:09:51.385836] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.132 [2024-07-22 18:09:51.385969] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.132 [2024-07-22 18:09:51.385976] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.132 [2024-07-22 18:09:51.385982] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.132 [2024-07-22 18:09:51.388081] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.132 [2024-07-22 18:09:51.396892] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.132 [2024-07-22 18:09:51.397414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-07-22 18:09:51.397738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.132 [2024-07-22 18:09:51.397749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.132 [2024-07-22 18:09:51.397758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.132 [2024-07-22 18:09:51.397892] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.132 [2024-07-22 18:09:51.398027] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.132 [2024-07-22 18:09:51.398035] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.132 [2024-07-22 18:09:51.398043] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.132 [2024-07-22 18:09:51.400136] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.394 [2024-07-22 18:09:51.409114] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.394 [2024-07-22 18:09:51.409592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.394 [2024-07-22 18:09:51.409876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.394 [2024-07-22 18:09:51.409885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.394 [2024-07-22 18:09:51.409892] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.394 [2024-07-22 18:09:51.410008] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.394 [2024-07-22 18:09:51.410191] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.394 [2024-07-22 18:09:51.410198] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.394 [2024-07-22 18:09:51.410204] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.394 [2024-07-22 18:09:51.412198] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.394 [2024-07-22 18:09:51.421419] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.394 [2024-07-22 18:09:51.421799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.394 [2024-07-22 18:09:51.422120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.394 [2024-07-22 18:09:51.422128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.394 [2024-07-22 18:09:51.422135] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.394 [2024-07-22 18:09:51.422284] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.394 [2024-07-22 18:09:51.422412] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.394 [2024-07-22 18:09:51.422420] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.394 [2024-07-22 18:09:51.422426] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.394 [2024-07-22 18:09:51.424494] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.394 [2024-07-22 18:09:51.433556] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.394 [2024-07-22 18:09:51.433994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.394 [2024-07-22 18:09:51.434296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.394 [2024-07-22 18:09:51.434305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.394 [2024-07-22 18:09:51.434312] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.394 [2024-07-22 18:09:51.434466] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.394 [2024-07-22 18:09:51.434616] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.394 [2024-07-22 18:09:51.434624] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.394 [2024-07-22 18:09:51.434630] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.395 [2024-07-22 18:09:51.436562] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.395 [2024-07-22 18:09:51.445817] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.395 [2024-07-22 18:09:51.446219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.395 [2024-07-22 18:09:51.446494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.395 [2024-07-22 18:09:51.446503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.395 [2024-07-22 18:09:51.446510] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.395 [2024-07-22 18:09:51.446608] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.395 [2024-07-22 18:09:51.446723] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.395 [2024-07-22 18:09:51.446731] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.395 [2024-07-22 18:09:51.446737] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.395 [2024-07-22 18:09:51.448853] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.395 [2024-07-22 18:09:51.458063] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.395 [2024-07-22 18:09:51.458402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.395 [2024-07-22 18:09:51.458725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.395 [2024-07-22 18:09:51.458733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.395 [2024-07-22 18:09:51.458740] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.395 [2024-07-22 18:09:51.458857] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.395 [2024-07-22 18:09:51.458972] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.395 [2024-07-22 18:09:51.458982] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.395 [2024-07-22 18:09:51.458989] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.395 [2024-07-22 18:09:51.461136] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.395 [2024-07-22 18:09:51.470415] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.395 [2024-07-22 18:09:51.470946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.395 [2024-07-22 18:09:51.471159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.395 [2024-07-22 18:09:51.471168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.395 [2024-07-22 18:09:51.471175] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.395 [2024-07-22 18:09:51.471308] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.395 [2024-07-22 18:09:51.471445] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.395 [2024-07-22 18:09:51.471453] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.395 [2024-07-22 18:09:51.471459] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.395 [2024-07-22 18:09:51.473735] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.395 [2024-07-22 18:09:51.482700] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.395 [2024-07-22 18:09:51.483231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.395 [2024-07-22 18:09:51.483579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.395 [2024-07-22 18:09:51.483592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.395 [2024-07-22 18:09:51.483602] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.395 [2024-07-22 18:09:51.483736] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.395 [2024-07-22 18:09:51.483889] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.395 [2024-07-22 18:09:51.483897] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.395 [2024-07-22 18:09:51.483904] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.395 [2024-07-22 18:09:51.485941] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.395 [2024-07-22 18:09:51.495047] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.395 [2024-07-22 18:09:51.495470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.395 [2024-07-22 18:09:51.495803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.395 [2024-07-22 18:09:51.495812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.395 [2024-07-22 18:09:51.495819] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.395 [2024-07-22 18:09:51.495987] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.395 [2024-07-22 18:09:51.496119] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.395 [2024-07-22 18:09:51.496126] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.395 [2024-07-22 18:09:51.496136] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.395 [2024-07-22 18:09:51.498302] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.395 [2024-07-22 18:09:51.507304] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.395 [2024-07-22 18:09:51.507726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.395 [2024-07-22 18:09:51.508037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.395 [2024-07-22 18:09:51.508046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.395 [2024-07-22 18:09:51.508053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.395 [2024-07-22 18:09:51.508152] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.395 [2024-07-22 18:09:51.508250] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.395 [2024-07-22 18:09:51.508258] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.395 [2024-07-22 18:09:51.508264] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.395 [2024-07-22 18:09:51.510311] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.395 [2024-07-22 18:09:51.519606] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.395 [2024-07-22 18:09:51.520046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.395 [2024-07-22 18:09:51.520383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.395 [2024-07-22 18:09:51.520396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.395 [2024-07-22 18:09:51.520405] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.395 [2024-07-22 18:09:51.520556] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.395 [2024-07-22 18:09:51.520675] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.395 [2024-07-22 18:09:51.520683] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.395 [2024-07-22 18:09:51.520690] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.395 [2024-07-22 18:09:51.522917] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.395 [2024-07-22 18:09:51.531900] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.395 [2024-07-22 18:09:51.532380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.395 [2024-07-22 18:09:51.532707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.395 [2024-07-22 18:09:51.532716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.395 [2024-07-22 18:09:51.532723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.395 [2024-07-22 18:09:51.532890] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.395 [2024-07-22 18:09:51.533057] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.395 [2024-07-22 18:09:51.533064] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.395 [2024-07-22 18:09:51.533071] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.395 [2024-07-22 18:09:51.535089] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.395 [2024-07-22 18:09:51.544342] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.395 [2024-07-22 18:09:51.544767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.395 [2024-07-22 18:09:51.545095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.395 [2024-07-22 18:09:51.545105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.395 [2024-07-22 18:09:51.545112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.395 [2024-07-22 18:09:51.545261] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.395 [2024-07-22 18:09:51.545364] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.395 [2024-07-22 18:09:51.545371] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.395 [2024-07-22 18:09:51.545378] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.395 [2024-07-22 18:09:51.547494] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.395 [2024-07-22 18:09:51.556677] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.395 [2024-07-22 18:09:51.557196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.395 [2024-07-22 18:09:51.557511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.395 [2024-07-22 18:09:51.557525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.395 [2024-07-22 18:09:51.557534] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.395 [2024-07-22 18:09:51.557684] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.395 [2024-07-22 18:09:51.557786] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.395 [2024-07-22 18:09:51.557794] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.395 [2024-07-22 18:09:51.557801] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.395 [2024-07-22 18:09:51.559875] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.395 [2024-07-22 18:09:51.568981] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.395 [2024-07-22 18:09:51.569425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.395 [2024-07-22 18:09:51.569757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.395 [2024-07-22 18:09:51.569767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.395 [2024-07-22 18:09:51.569774] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.395 [2024-07-22 18:09:51.569907] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.395 [2024-07-22 18:09:51.570005] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.395 [2024-07-22 18:09:51.570012] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.396 [2024-07-22 18:09:51.570018] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.396 [2024-07-22 18:09:51.572067] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.396 [2024-07-22 18:09:51.581164] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.396 [2024-07-22 18:09:51.581682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.396 [2024-07-22 18:09:51.582024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.396 [2024-07-22 18:09:51.582036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.396 [2024-07-22 18:09:51.582044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.396 [2024-07-22 18:09:51.582161] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.396 [2024-07-22 18:09:51.582346] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.396 [2024-07-22 18:09:51.582360] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.396 [2024-07-22 18:09:51.582368] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.396 [2024-07-22 18:09:51.584589] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.396 [2024-07-22 18:09:51.593532] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.396 [2024-07-22 18:09:51.593934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.396 [2024-07-22 18:09:51.594280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.396 [2024-07-22 18:09:51.594290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.396 [2024-07-22 18:09:51.594297] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.396 [2024-07-22 18:09:51.594400] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.396 [2024-07-22 18:09:51.594499] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.396 [2024-07-22 18:09:51.594506] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.396 [2024-07-22 18:09:51.594513] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.396 [2024-07-22 18:09:51.596597] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.396 [2024-07-22 18:09:51.605835] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.396 [2024-07-22 18:09:51.606271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.396 [2024-07-22 18:09:51.606464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.396 [2024-07-22 18:09:51.606475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.396 [2024-07-22 18:09:51.606482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.396 [2024-07-22 18:09:51.606631] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.396 [2024-07-22 18:09:51.606746] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.396 [2024-07-22 18:09:51.606753] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.396 [2024-07-22 18:09:51.606759] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.396 [2024-07-22 18:09:51.608890] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.396 [2024-07-22 18:09:51.618036] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.396 [2024-07-22 18:09:51.618490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.396 [2024-07-22 18:09:51.618770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.396 [2024-07-22 18:09:51.618779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.396 [2024-07-22 18:09:51.618786] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.396 [2024-07-22 18:09:51.618918] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.396 [2024-07-22 18:09:51.619049] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.396 [2024-07-22 18:09:51.619057] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.396 [2024-07-22 18:09:51.619063] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.396 [2024-07-22 18:09:51.621112] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.396 [2024-07-22 18:09:51.630440] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.396 [2024-07-22 18:09:51.630972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.396 [2024-07-22 18:09:51.631250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.396 [2024-07-22 18:09:51.631262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.396 [2024-07-22 18:09:51.631271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.396 [2024-07-22 18:09:51.631412] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.396 [2024-07-22 18:09:51.631547] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.396 [2024-07-22 18:09:51.631555] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.396 [2024-07-22 18:09:51.631562] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.396 [2024-07-22 18:09:51.633701] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.396 [2024-07-22 18:09:51.642830] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.396 [2024-07-22 18:09:51.643437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.396 [2024-07-22 18:09:51.643742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.396 [2024-07-22 18:09:51.643754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.396 [2024-07-22 18:09:51.643763] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.396 [2024-07-22 18:09:51.643896] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.396 [2024-07-22 18:09:51.644014] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.396 [2024-07-22 18:09:51.644022] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.396 [2024-07-22 18:09:51.644029] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.396 [2024-07-22 18:09:51.646067] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.396 [2024-07-22 18:09:51.655183] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.396 [2024-07-22 18:09:51.655680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.396 [2024-07-22 18:09:51.655975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.396 [2024-07-22 18:09:51.655987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.396 [2024-07-22 18:09:51.655996] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.396 [2024-07-22 18:09:51.656130] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.396 [2024-07-22 18:09:51.656281] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.396 [2024-07-22 18:09:51.656289] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.396 [2024-07-22 18:09:51.656296] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.396 [2024-07-22 18:09:51.658315] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.396 [2024-07-22 18:09:51.667545] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.396 [2024-07-22 18:09:51.667970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.396 [2024-07-22 18:09:51.668301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.396 [2024-07-22 18:09:51.668310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.396 [2024-07-22 18:09:51.668317] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.396 [2024-07-22 18:09:51.668455] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.396 [2024-07-22 18:09:51.668621] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.396 [2024-07-22 18:09:51.668628] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.396 [2024-07-22 18:09:51.668635] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.659 [2024-07-22 18:09:51.670595] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.659 [2024-07-22 18:09:51.679990] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.659 [2024-07-22 18:09:51.680455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.659 [2024-07-22 18:09:51.680788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.659 [2024-07-22 18:09:51.680800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.659 [2024-07-22 18:09:51.680808] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.659 [2024-07-22 18:09:51.680959] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.659 [2024-07-22 18:09:51.681111] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.659 [2024-07-22 18:09:51.681118] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.659 [2024-07-22 18:09:51.681126] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.659 [2024-07-22 18:09:51.683321] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.659 [2024-07-22 18:09:51.692473] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.659 [2024-07-22 18:09:51.692884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.659 [2024-07-22 18:09:51.693145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.659 [2024-07-22 18:09:51.693154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.659 [2024-07-22 18:09:51.693165] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.659 [2024-07-22 18:09:51.693315] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.659 [2024-07-22 18:09:51.693435] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.659 [2024-07-22 18:09:51.693444] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.659 [2024-07-22 18:09:51.693450] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.659 [2024-07-22 18:09:51.695566] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.659 [2024-07-22 18:09:51.704540] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.659 [2024-07-22 18:09:51.705090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.659 [2024-07-22 18:09:51.705430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.659 [2024-07-22 18:09:51.705444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.659 [2024-07-22 18:09:51.705453] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.659 [2024-07-22 18:09:51.705621] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.659 [2024-07-22 18:09:51.705757] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.659 [2024-07-22 18:09:51.705764] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.659 [2024-07-22 18:09:51.705771] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.659 [2024-07-22 18:09:51.707993] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.659 [2024-07-22 18:09:51.716743] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.659 [2024-07-22 18:09:51.717215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.659 [2024-07-22 18:09:51.717512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.659 [2024-07-22 18:09:51.717522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.659 [2024-07-22 18:09:51.717529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.659 [2024-07-22 18:09:51.717627] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.659 [2024-07-22 18:09:51.717826] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.659 [2024-07-22 18:09:51.717833] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.659 [2024-07-22 18:09:51.717840] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.659 [2024-07-22 18:09:51.719699] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.659 [2024-07-22 18:09:51.729141] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.659 [2024-07-22 18:09:51.729624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.659 [2024-07-22 18:09:51.729977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.659 [2024-07-22 18:09:51.729987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.659 [2024-07-22 18:09:51.729995] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.659 [2024-07-22 18:09:51.730080] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.659 [2024-07-22 18:09:51.730195] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.659 [2024-07-22 18:09:51.730202] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.659 [2024-07-22 18:09:51.730210] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.659 [2024-07-22 18:09:51.732241] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.659 [2024-07-22 18:09:51.741591] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.659 [2024-07-22 18:09:51.742071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.659 [2024-07-22 18:09:51.742391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.659 [2024-07-22 18:09:51.742411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.659 [2024-07-22 18:09:51.742423] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.659 [2024-07-22 18:09:51.742614] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.659 [2024-07-22 18:09:51.742749] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.659 [2024-07-22 18:09:51.742760] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.659 [2024-07-22 18:09:51.742770] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.659 [2024-07-22 18:09:51.745097] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.659 [2024-07-22 18:09:51.753804] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.659 [2024-07-22 18:09:51.754281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.659 [2024-07-22 18:09:51.754648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.659 [2024-07-22 18:09:51.754658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.659 [2024-07-22 18:09:51.754665] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.660 [2024-07-22 18:09:51.754782] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.660 [2024-07-22 18:09:51.754896] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.660 [2024-07-22 18:09:51.754903] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.660 [2024-07-22 18:09:51.754909] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.660 [2024-07-22 18:09:51.756863] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.660 [2024-07-22 18:09:51.766155] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.660 [2024-07-22 18:09:51.766706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.660 [2024-07-22 18:09:51.767073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.660 [2024-07-22 18:09:51.767085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.660 [2024-07-22 18:09:51.767094] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.660 [2024-07-22 18:09:51.767227] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.660 [2024-07-22 18:09:51.767376] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.660 [2024-07-22 18:09:51.767385] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.660 [2024-07-22 18:09:51.767392] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.660 [2024-07-22 18:09:51.769559] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.660 [2024-07-22 18:09:51.778455] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.660 [2024-07-22 18:09:51.778877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.660 [2024-07-22 18:09:51.779199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.660 [2024-07-22 18:09:51.779208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.660 [2024-07-22 18:09:51.779215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.660 [2024-07-22 18:09:51.779371] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.660 [2024-07-22 18:09:51.779488] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.660 [2024-07-22 18:09:51.779495] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.660 [2024-07-22 18:09:51.779501] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.660 [2024-07-22 18:09:51.781683] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.660 [2024-07-22 18:09:51.790864] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.660 [2024-07-22 18:09:51.791385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.660 [2024-07-22 18:09:51.791771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.660 [2024-07-22 18:09:51.791784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.660 [2024-07-22 18:09:51.791793] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.660 [2024-07-22 18:09:51.791942] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.660 [2024-07-22 18:09:51.792060] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.660 [2024-07-22 18:09:51.792068] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.660 [2024-07-22 18:09:51.792075] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.660 [2024-07-22 18:09:51.794128] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.660 [2024-07-22 18:09:51.803025] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.660 [2024-07-22 18:09:51.803423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.660 [2024-07-22 18:09:51.803713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.660 [2024-07-22 18:09:51.803722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.660 [2024-07-22 18:09:51.803729] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.660 [2024-07-22 18:09:51.803845] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.660 [2024-07-22 18:09:51.803994] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.660 [2024-07-22 18:09:51.804006] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.660 [2024-07-22 18:09:51.804012] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.660 [2024-07-22 18:09:51.806078] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.660 [2024-07-22 18:09:51.815376] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.660 [2024-07-22 18:09:51.815812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.660 [2024-07-22 18:09:51.816129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.660 [2024-07-22 18:09:51.816138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.660 [2024-07-22 18:09:51.816145] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.660 [2024-07-22 18:09:51.816294] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.660 [2024-07-22 18:09:51.816447] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.660 [2024-07-22 18:09:51.816455] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.660 [2024-07-22 18:09:51.816461] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.660 [2024-07-22 18:09:51.818488] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.660 [2024-07-22 18:09:51.827753] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.660 [2024-07-22 18:09:51.828189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.660 [2024-07-22 18:09:51.828501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.660 [2024-07-22 18:09:51.828511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.660 [2024-07-22 18:09:51.828517] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.660 [2024-07-22 18:09:51.828684] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.660 [2024-07-22 18:09:51.828835] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.660 [2024-07-22 18:09:51.828842] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.660 [2024-07-22 18:09:51.828848] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.660 [2024-07-22 18:09:51.830789] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.660 [2024-07-22 18:09:51.840153] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.660 [2024-07-22 18:09:51.840756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.660 [2024-07-22 18:09:51.840978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.660 [2024-07-22 18:09:51.840990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.660 [2024-07-22 18:09:51.840999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.660 [2024-07-22 18:09:51.841149] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.660 [2024-07-22 18:09:51.841319] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.660 [2024-07-22 18:09:51.841327] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.660 [2024-07-22 18:09:51.841338] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.660 [2024-07-22 18:09:51.843566] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.660 [2024-07-22 18:09:51.852568] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.660 [2024-07-22 18:09:51.853128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.660 [2024-07-22 18:09:51.853502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.660 [2024-07-22 18:09:51.853515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.660 [2024-07-22 18:09:51.853524] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.660 [2024-07-22 18:09:51.853675] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.660 [2024-07-22 18:09:51.853811] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.660 [2024-07-22 18:09:51.853818] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.660 [2024-07-22 18:09:51.853825] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.660 [2024-07-22 18:09:51.855927] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.660 [2024-07-22 18:09:51.864847] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.660 [2024-07-22 18:09:51.865280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.660 [2024-07-22 18:09:51.865629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.660 [2024-07-22 18:09:51.865638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.660 [2024-07-22 18:09:51.865645] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.660 [2024-07-22 18:09:51.865795] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.660 [2024-07-22 18:09:51.865911] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.661 [2024-07-22 18:09:51.865918] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.661 [2024-07-22 18:09:51.865924] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.661 [2024-07-22 18:09:51.867886] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.661 [2024-07-22 18:09:51.877196] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.661 [2024-07-22 18:09:51.877568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.661 [2024-07-22 18:09:51.877774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.661 [2024-07-22 18:09:51.877783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.661 [2024-07-22 18:09:51.877789] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.661 [2024-07-22 18:09:51.877888] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.661 [2024-07-22 18:09:51.878036] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.661 [2024-07-22 18:09:51.878043] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.661 [2024-07-22 18:09:51.878049] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.661 [2024-07-22 18:09:51.879966] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.661 [2024-07-22 18:09:51.889674] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.661 [2024-07-22 18:09:51.890134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.661 [2024-07-22 18:09:51.890437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.661 [2024-07-22 18:09:51.890447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.661 [2024-07-22 18:09:51.890454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.661 [2024-07-22 18:09:51.890586] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.661 [2024-07-22 18:09:51.890718] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.661 [2024-07-22 18:09:51.890725] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.661 [2024-07-22 18:09:51.890731] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.661 [2024-07-22 18:09:51.892875] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.661 [2024-07-22 18:09:51.901924] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.661 [2024-07-22 18:09:51.902444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.661 [2024-07-22 18:09:51.902802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.661 [2024-07-22 18:09:51.902814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.661 [2024-07-22 18:09:51.902823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.661 [2024-07-22 18:09:51.902940] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.661 [2024-07-22 18:09:51.903075] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.661 [2024-07-22 18:09:51.903083] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.661 [2024-07-22 18:09:51.903090] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.661 [2024-07-22 18:09:51.905332] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.661 [2024-07-22 18:09:51.914157] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.661 [2024-07-22 18:09:51.914676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.661 [2024-07-22 18:09:51.915084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.661 [2024-07-22 18:09:51.915096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.661 [2024-07-22 18:09:51.915106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.661 [2024-07-22 18:09:51.915290] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.661 [2024-07-22 18:09:51.915450] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.661 [2024-07-22 18:09:51.915459] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.661 [2024-07-22 18:09:51.915466] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.661 [2024-07-22 18:09:51.917636] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.661 [2024-07-22 18:09:51.926402] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.661 [2024-07-22 18:09:51.926943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.661 [2024-07-22 18:09:51.927284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.661 [2024-07-22 18:09:51.927296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.661 [2024-07-22 18:09:51.927304] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.661 [2024-07-22 18:09:51.927465] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.661 [2024-07-22 18:09:51.927635] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.661 [2024-07-22 18:09:51.927643] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.661 [2024-07-22 18:09:51.927649] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.661 [2024-07-22 18:09:51.929721] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.924 [2024-07-22 18:09:51.938666] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.924 [2024-07-22 18:09:51.939110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.924 [2024-07-22 18:09:51.939312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.924 [2024-07-22 18:09:51.939322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.924 [2024-07-22 18:09:51.939329] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.924 [2024-07-22 18:09:51.939502] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.924 [2024-07-22 18:09:51.939600] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.924 [2024-07-22 18:09:51.939608] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.924 [2024-07-22 18:09:51.939614] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.924 [2024-07-22 18:09:51.941812] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.924 [2024-07-22 18:09:51.950896] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.924 [2024-07-22 18:09:51.951422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.924 [2024-07-22 18:09:51.951645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.924 [2024-07-22 18:09:51.951657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.924 [2024-07-22 18:09:51.951666] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.924 [2024-07-22 18:09:51.951782] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.924 [2024-07-22 18:09:51.951934] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.924 [2024-07-22 18:09:51.951942] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.924 [2024-07-22 18:09:51.951948] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.924 [2024-07-22 18:09:51.954176] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.924 [2024-07-22 18:09:51.963209] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.924 [2024-07-22 18:09:51.963692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.924 [2024-07-22 18:09:51.963983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.924 [2024-07-22 18:09:51.963992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.924 [2024-07-22 18:09:51.963999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.924 [2024-07-22 18:09:51.964147] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.924 [2024-07-22 18:09:51.964263] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.924 [2024-07-22 18:09:51.964270] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.924 [2024-07-22 18:09:51.964276] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.924 [2024-07-22 18:09:51.966396] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.924 [2024-07-22 18:09:51.975549] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.924 [2024-07-22 18:09:51.976082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.924 [2024-07-22 18:09:51.976418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.924 [2024-07-22 18:09:51.976430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.924 [2024-07-22 18:09:51.976439] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.924 [2024-07-22 18:09:51.976591] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.924 [2024-07-22 18:09:51.976709] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.924 [2024-07-22 18:09:51.976717] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.924 [2024-07-22 18:09:51.976724] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.924 [2024-07-22 18:09:51.978637] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.924 [2024-07-22 18:09:51.987869] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.924 [2024-07-22 18:09:51.988275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.924 [2024-07-22 18:09:51.988578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.924 [2024-07-22 18:09:51.988587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.924 [2024-07-22 18:09:51.988594] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.924 [2024-07-22 18:09:51.988744] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.924 [2024-07-22 18:09:51.988859] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.924 [2024-07-22 18:09:51.988866] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.924 [2024-07-22 18:09:51.988873] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.924 [2024-07-22 18:09:51.990952] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.924 [2024-07-22 18:09:52.000181] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.924 [2024-07-22 18:09:52.000619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.924 [2024-07-22 18:09:52.000953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.924 [2024-07-22 18:09:52.000965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.924 [2024-07-22 18:09:52.000974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.924 [2024-07-22 18:09:52.001142] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.924 [2024-07-22 18:09:52.001260] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.924 [2024-07-22 18:09:52.001268] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.924 [2024-07-22 18:09:52.001275] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.924 [2024-07-22 18:09:52.003537] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.924 [2024-07-22 18:09:52.012493] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.924 [2024-07-22 18:09:52.013028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.924 [2024-07-22 18:09:52.013392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.924 [2024-07-22 18:09:52.013406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.924 [2024-07-22 18:09:52.013415] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.924 [2024-07-22 18:09:52.013531] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.924 [2024-07-22 18:09:52.013684] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.924 [2024-07-22 18:09:52.013691] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.924 [2024-07-22 18:09:52.013698] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.924 [2024-07-22 18:09:52.015718] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.924 [2024-07-22 18:09:52.024844] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.924 [2024-07-22 18:09:52.025399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.924 [2024-07-22 18:09:52.025750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.924 [2024-07-22 18:09:52.025762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.924 [2024-07-22 18:09:52.025771] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.924 [2024-07-22 18:09:52.025921] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.924 [2024-07-22 18:09:52.026090] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.924 [2024-07-22 18:09:52.026098] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.924 [2024-07-22 18:09:52.026104] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.924 [2024-07-22 18:09:52.028158] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.924 [2024-07-22 18:09:52.037388] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.924 [2024-07-22 18:09:52.037985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.924 [2024-07-22 18:09:52.038324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.924 [2024-07-22 18:09:52.038336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.925 [2024-07-22 18:09:52.038358] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.925 [2024-07-22 18:09:52.038492] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.925 [2024-07-22 18:09:52.038593] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.925 [2024-07-22 18:09:52.038601] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.925 [2024-07-22 18:09:52.038607] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.925 [2024-07-22 18:09:52.040676] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.925 [2024-07-22 18:09:52.049650] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.925 [2024-07-22 18:09:52.050074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.925 [2024-07-22 18:09:52.050445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.925 [2024-07-22 18:09:52.050457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.925 [2024-07-22 18:09:52.050464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.925 [2024-07-22 18:09:52.050648] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.925 [2024-07-22 18:09:52.050747] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.925 [2024-07-22 18:09:52.050754] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.925 [2024-07-22 18:09:52.050760] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.925 [2024-07-22 18:09:52.053063] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.925 [2024-07-22 18:09:52.061967] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.925 [2024-07-22 18:09:52.062419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.925 [2024-07-22 18:09:52.062730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.925 [2024-07-22 18:09:52.062739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.925 [2024-07-22 18:09:52.062746] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.925 [2024-07-22 18:09:52.062878] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.925 [2024-07-22 18:09:52.063044] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.925 [2024-07-22 18:09:52.063051] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.925 [2024-07-22 18:09:52.063057] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.925 [2024-07-22 18:09:52.064935] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.925 [2024-07-22 18:09:52.074451] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.925 [2024-07-22 18:09:52.074958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.925 [2024-07-22 18:09:52.075164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.925 [2024-07-22 18:09:52.075176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.925 [2024-07-22 18:09:52.075185] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.925 [2024-07-22 18:09:52.075339] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.925 [2024-07-22 18:09:52.075483] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.925 [2024-07-22 18:09:52.075492] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.925 [2024-07-22 18:09:52.075498] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.925 [2024-07-22 18:09:52.077493] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.925 [2024-07-22 18:09:52.086703] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.925 [2024-07-22 18:09:52.087289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.925 [2024-07-22 18:09:52.087642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.925 [2024-07-22 18:09:52.087655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.925 [2024-07-22 18:09:52.087664] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.925 [2024-07-22 18:09:52.087797] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.925 [2024-07-22 18:09:52.087950] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.925 [2024-07-22 18:09:52.087957] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.925 [2024-07-22 18:09:52.087964] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.925 [2024-07-22 18:09:52.090119] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.925 [2024-07-22 18:09:52.098984] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.925 [2024-07-22 18:09:52.099478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.925 [2024-07-22 18:09:52.099791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.925 [2024-07-22 18:09:52.099802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.925 [2024-07-22 18:09:52.099812] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.925 [2024-07-22 18:09:52.099945] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.925 [2024-07-22 18:09:52.100097] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.925 [2024-07-22 18:09:52.100105] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.925 [2024-07-22 18:09:52.100111] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.925 [2024-07-22 18:09:52.102304] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.925 [2024-07-22 18:09:52.111384] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.925 [2024-07-22 18:09:52.111962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.925 [2024-07-22 18:09:52.112272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.925 [2024-07-22 18:09:52.112284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.925 [2024-07-22 18:09:52.112293] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.925 [2024-07-22 18:09:52.112523] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.925 [2024-07-22 18:09:52.112659] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.925 [2024-07-22 18:09:52.112667] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.925 [2024-07-22 18:09:52.112674] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.925 [2024-07-22 18:09:52.114841] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:47.925 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1882188 Killed "${NVMF_APP[@]}" "$@" 00:32:47.925 18:09:52 -- host/bdevperf.sh@36 -- # tgt_init 00:32:47.925 18:09:52 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:32:47.925 18:09:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:47.925 18:09:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:47.925 18:09:52 -- common/autotest_common.sh@10 -- # set +x 00:32:47.925 [2024-07-22 18:09:52.123558] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.925 [2024-07-22 18:09:52.124120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.925 [2024-07-22 18:09:52.124337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.925 [2024-07-22 18:09:52.124354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.925 [2024-07-22 18:09:52.124364] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.925 [2024-07-22 18:09:52.124463] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.925 [2024-07-22 18:09:52.124598] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.925 [2024-07-22 18:09:52.124606] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.925 [2024-07-22 18:09:52.124613] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.925 [2024-07-22 18:09:52.126734] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.925 18:09:52 -- nvmf/common.sh@469 -- # nvmfpid=1883736 00:32:47.925 18:09:52 -- nvmf/common.sh@470 -- # waitforlisten 1883736 00:32:47.925 18:09:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:47.925 18:09:52 -- common/autotest_common.sh@819 -- # '[' -z 1883736 ']' 00:32:47.925 18:09:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:47.925 18:09:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:47.925 18:09:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:47.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:47.925 18:09:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:47.925 18:09:52 -- common/autotest_common.sh@10 -- # set +x 00:32:47.925 [2024-07-22 18:09:52.135714] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.925 [2024-07-22 18:09:52.136176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.925 [2024-07-22 18:09:52.136618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.925 [2024-07-22 18:09:52.136653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.925 [2024-07-22 18:09:52.136663] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.925 [2024-07-22 18:09:52.136797] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.926 [2024-07-22 18:09:52.136932] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.926 [2024-07-22 18:09:52.136945] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.926 [2024-07-22 18:09:52.136953] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.926 [2024-07-22 18:09:52.139165] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.926 [2024-07-22 18:09:52.147968] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.926 [2024-07-22 18:09:52.148461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.926 [2024-07-22 18:09:52.148828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.926 [2024-07-22 18:09:52.148840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.926 [2024-07-22 18:09:52.148849] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.926 [2024-07-22 18:09:52.149017] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.926 [2024-07-22 18:09:52.149136] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.926 [2024-07-22 18:09:52.149144] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.926 [2024-07-22 18:09:52.149150] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.926 [2024-07-22 18:09:52.151324] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
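The repeated "connect() failed, errno = 111" entries above decode simply: on Linux, errno 111 is ECONNREFUSED. The bdevperf host keeps hitting it because the nvmf target process has just been killed (the Killed "${NVMF_APP[@]}" line above) and nothing is listening on 10.0.0.2:4420 until the restarted nvmf_tgt comes back up. A minimal standalone C sketch, not SPDK code, reproduces the same errno by connecting to a port with no listener; 127.0.0.1:4420 is an illustrative stand-in for the log's target address and assumes nothing is listening there.

/*
 * Minimal sketch (not SPDK code): shows that the "errno = 111" reported by
 * posix_sock_create in the log is ECONNREFUSED -- a connect() to an address
 * with no listener, which is exactly the situation while nvmf_tgt restarts.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { 0 };
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }

    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                      /* NVMe/TCP port used in the log */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* stand-in; assumes no listener here */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* On Linux this prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Once a listener is back on the target address, the same connect() would succeed and the reconnect attempts in the log would stop failing.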
00:32:47.926 [2024-07-22 18:09:52.160303] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.926 [2024-07-22 18:09:52.160860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.926 [2024-07-22 18:09:52.161207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.926 [2024-07-22 18:09:52.161218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.926 [2024-07-22 18:09:52.161227] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.926 [2024-07-22 18:09:52.161423] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.926 [2024-07-22 18:09:52.161627] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.926 [2024-07-22 18:09:52.161635] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.926 [2024-07-22 18:09:52.161641] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.926 [2024-07-22 18:09:52.163576] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.926 [2024-07-22 18:09:52.172756] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.926 [2024-07-22 18:09:52.173198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.926 [2024-07-22 18:09:52.173656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.926 [2024-07-22 18:09:52.173691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.926 [2024-07-22 18:09:52.173701] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.926 [2024-07-22 18:09:52.173834] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.926 [2024-07-22 18:09:52.173969] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.926 [2024-07-22 18:09:52.173977] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.926 [2024-07-22 18:09:52.173988] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.926 [2024-07-22 18:09:52.176111] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.926 [2024-07-22 18:09:52.184700] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:32:47.926 [2024-07-22 18:09:52.184749] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:47.926 [2024-07-22 18:09:52.185102] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.926 [2024-07-22 18:09:52.185648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.926 [2024-07-22 18:09:52.185990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:47.926 [2024-07-22 18:09:52.186003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:47.926 [2024-07-22 18:09:52.186012] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:47.926 [2024-07-22 18:09:52.186180] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:47.926 [2024-07-22 18:09:52.186299] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:47.926 [2024-07-22 18:09:52.186307] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:47.926 [2024-07-22 18:09:52.186314] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:47.926 [2024-07-22 18:09:52.188437] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:47.926 [2024-07-22 18:09:52.197562] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:47.926 [2024-07-22 18:09:52.198006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.188 [2024-07-22 18:09:52.198344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.188 [2024-07-22 18:09:52.198358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.188 [2024-07-22 18:09:52.198366] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.188 [2024-07-22 18:09:52.198516] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.188 [2024-07-22 18:09:52.198633] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.188 [2024-07-22 18:09:52.198640] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.188 [2024-07-22 18:09:52.198647] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.188 [2024-07-22 18:09:52.200672] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.188 [2024-07-22 18:09:52.209897] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.188 [2024-07-22 18:09:52.210343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.188 [2024-07-22 18:09:52.210829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.188 [2024-07-22 18:09:52.210864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.188 [2024-07-22 18:09:52.210874] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.188 [2024-07-22 18:09:52.210990] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.188 [2024-07-22 18:09:52.211130] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.188 [2024-07-22 18:09:52.211138] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.188 [2024-07-22 18:09:52.211146] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.188 [2024-07-22 18:09:52.213234] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.188 EAL: No free 2048 kB hugepages reported on node 1 00:32:48.188 [2024-07-22 18:09:52.222136] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.188 [2024-07-22 18:09:52.222612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.188 [2024-07-22 18:09:52.222949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.188 [2024-07-22 18:09:52.222958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.188 [2024-07-22 18:09:52.222966] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.188 [2024-07-22 18:09:52.223082] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.188 [2024-07-22 18:09:52.223216] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.188 [2024-07-22 18:09:52.223223] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.188 [2024-07-22 18:09:52.223230] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.188 [2024-07-22 18:09:52.225209] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.189 [2024-07-22 18:09:52.234442] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.189 [2024-07-22 18:09:52.234976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.189 [2024-07-22 18:09:52.235212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.189 [2024-07-22 18:09:52.235224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.189 [2024-07-22 18:09:52.235233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.189 [2024-07-22 18:09:52.235390] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.189 [2024-07-22 18:09:52.235510] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.189 [2024-07-22 18:09:52.235518] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.189 [2024-07-22 18:09:52.235525] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.189 [2024-07-22 18:09:52.237645] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.189 [2024-07-22 18:09:52.246767] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.189 [2024-07-22 18:09:52.247212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.189 [2024-07-22 18:09:52.247563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.189 [2024-07-22 18:09:52.247576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.189 [2024-07-22 18:09:52.247585] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.189 [2024-07-22 18:09:52.247753] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.189 [2024-07-22 18:09:52.247888] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.189 [2024-07-22 18:09:52.247900] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.189 [2024-07-22 18:09:52.247907] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.189 [2024-07-22 18:09:52.249826] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.189 [2024-07-22 18:09:52.252436] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:48.189 [2024-07-22 18:09:52.259046] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.189 [2024-07-22 18:09:52.259638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.189 [2024-07-22 18:09:52.260040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.189 [2024-07-22 18:09:52.260052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.189 [2024-07-22 18:09:52.260061] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.189 [2024-07-22 18:09:52.260195] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.189 [2024-07-22 18:09:52.260354] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.189 [2024-07-22 18:09:52.260363] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.189 [2024-07-22 18:09:52.260371] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.189 [2024-07-22 18:09:52.262452] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.189 [2024-07-22 18:09:52.271327] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.189 [2024-07-22 18:09:52.271744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.189 [2024-07-22 18:09:52.272091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.189 [2024-07-22 18:09:52.272103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.189 [2024-07-22 18:09:52.272112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.189 [2024-07-22 18:09:52.272228] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.189 [2024-07-22 18:09:52.272370] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.189 [2024-07-22 18:09:52.272380] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.189 [2024-07-22 18:09:52.272387] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.189 [2024-07-22 18:09:52.274492] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.189 [2024-07-22 18:09:52.283676] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.189 [2024-07-22 18:09:52.284136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.189 [2024-07-22 18:09:52.284460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.189 [2024-07-22 18:09:52.284470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.189 [2024-07-22 18:09:52.284478] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.189 [2024-07-22 18:09:52.284662] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.189 [2024-07-22 18:09:52.284762] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.189 [2024-07-22 18:09:52.284769] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.189 [2024-07-22 18:09:52.284781] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.189 [2024-07-22 18:09:52.286813] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.189 [2024-07-22 18:09:52.296196] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.189 [2024-07-22 18:09:52.296702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.189 [2024-07-22 18:09:52.297028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.189 [2024-07-22 18:09:52.297037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.189 [2024-07-22 18:09:52.297044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.189 [2024-07-22 18:09:52.297177] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.189 [2024-07-22 18:09:52.297275] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.189 [2024-07-22 18:09:52.297282] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.189 [2024-07-22 18:09:52.297289] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.189 [2024-07-22 18:09:52.299250] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
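The cadence of these entries repeats the same sequence roughly every 10-12 ms: disconnect notice, two refused connect() attempts, a failed reconnect poll, the controller marked failed, and "Resetting controller failed." at the end of the cycle. As a rough, hypothetical illustration of the underlying wait-until-the-listener-is-back pattern (not SPDK's actual reconnect path; wait_for_listener, the retry count, and the delay are all invented for the example):

/*
 * Illustrative sketch only -- not SPDK's reconnect logic. Keep retrying the
 * TCP connect until the restarted target accepts connections again, giving
 * up after a bounded number of attempts.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static bool wait_for_listener(const char *ip, uint16_t port, int max_attempts)
{
    struct sockaddr_in addr = { 0 };

    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    for (int attempt = 1; attempt <= max_attempts; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0) {
            return false;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return true;    /* target is accepting connections again */
        }
        fprintf(stderr, "attempt %d: connect() failed, errno = %d (%s)\n",
                attempt, errno, strerror(errno));
        close(fd);
        usleep(10 * 1000);  /* back off ~10 ms, roughly the cadence seen in the log */
    }
    return false;
}

int main(void)
{
    /* Assumed values mirroring the log's target address. */
    if (!wait_for_listener("10.0.0.2", 4420, 100)) {
        fprintf(stderr, "listener never came back\n");
        return 1;
    }
    printf("reconnected\n");
    return 0;
}

In the actual run, the harness separately waits for the restarted nvmf_tgt to expose its RPC socket (the waitforlisten call on /var/tmp/spdk.sock), which is what the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above refers to.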
00:32:48.189 [2024-07-22 18:09:52.308565] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:32:48.189 [2024-07-22 18:09:52.309089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.189 [2024-07-22 18:09:52.309337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:48.189 [2024-07-22 18:09:52.309356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420
00:32:48.189 [2024-07-22 18:09:52.309366] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set
00:32:48.189 [2024-07-22 18:09:52.309503] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor
00:32:48.189 [2024-07-22 18:09:52.309638] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:32:48.189 [2024-07-22 18:09:52.309646] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:32:48.189 [2024-07-22 18:09:52.309654] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:32:48.189 [2024-07-22 18:09:52.311320] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:32:48.189 [2024-07-22 18:09:52.311430] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:48.189 [2024-07-22 18:09:52.311438] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:48.189 [2024-07-22 18:09:52.311444] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:48.189 [2024-07-22 18:09:52.311531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:32:48.189 [2024-07-22 18:09:52.311631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:32:48.189 [2024-07-22 18:09:52.311634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:32:48.189 [2024-07-22 18:09:52.311706] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:32:48.189 [2024-07-22 18:09:52.320814] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.189 [2024-07-22 18:09:52.321386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.189 [2024-07-22 18:09:52.321732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.189 [2024-07-22 18:09:52.321749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.189 [2024-07-22 18:09:52.321758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.189 [2024-07-22 18:09:52.321916] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.189 [2024-07-22 18:09:52.322035] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.189 [2024-07-22 18:09:52.322043] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.189 [2024-07-22 18:09:52.322050] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.189 [2024-07-22 18:09:52.324107] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.189 [2024-07-22 18:09:52.332982] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.189 [2024-07-22 18:09:52.333652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.189 [2024-07-22 18:09:52.333972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.189 [2024-07-22 18:09:52.333984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.189 [2024-07-22 18:09:52.333993] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.189 [2024-07-22 18:09:52.334114] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.190 [2024-07-22 18:09:52.334215] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.190 [2024-07-22 18:09:52.334223] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.190 [2024-07-22 18:09:52.334230] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.190 [2024-07-22 18:09:52.336251] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.190 [2024-07-22 18:09:52.345401] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.190 [2024-07-22 18:09:52.345878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.190 [2024-07-22 18:09:52.346207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.190 [2024-07-22 18:09:52.346219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.190 [2024-07-22 18:09:52.346228] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.190 [2024-07-22 18:09:52.346369] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.190 [2024-07-22 18:09:52.346522] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.190 [2024-07-22 18:09:52.346530] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.190 [2024-07-22 18:09:52.346537] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.190 [2024-07-22 18:09:52.348743] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.190 [2024-07-22 18:09:52.357845] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.190 [2024-07-22 18:09:52.358301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.190 [2024-07-22 18:09:52.358611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.190 [2024-07-22 18:09:52.358622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.190 [2024-07-22 18:09:52.358634] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.190 [2024-07-22 18:09:52.358751] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.190 [2024-07-22 18:09:52.358917] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.190 [2024-07-22 18:09:52.358924] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.190 [2024-07-22 18:09:52.358931] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.190 [2024-07-22 18:09:52.360958] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.190 [2024-07-22 18:09:52.370063] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.190 [2024-07-22 18:09:52.370640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.190 [2024-07-22 18:09:52.370960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.190 [2024-07-22 18:09:52.370972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.190 [2024-07-22 18:09:52.370981] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.190 [2024-07-22 18:09:52.371115] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.190 [2024-07-22 18:09:52.371285] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.190 [2024-07-22 18:09:52.371294] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.190 [2024-07-22 18:09:52.371301] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.190 [2024-07-22 18:09:52.373222] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.190 [2024-07-22 18:09:52.382253] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.190 [2024-07-22 18:09:52.382674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.190 [2024-07-22 18:09:52.382996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.190 [2024-07-22 18:09:52.383008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.190 [2024-07-22 18:09:52.383016] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.190 [2024-07-22 18:09:52.383150] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.190 [2024-07-22 18:09:52.383285] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.190 [2024-07-22 18:09:52.383293] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.190 [2024-07-22 18:09:52.383300] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.190 [2024-07-22 18:09:52.385339] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.190 [2024-07-22 18:09:52.394309] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.190 [2024-07-22 18:09:52.394786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.190 [2024-07-22 18:09:52.395080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.190 [2024-07-22 18:09:52.395089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.190 [2024-07-22 18:09:52.395096] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.190 [2024-07-22 18:09:52.395233] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.190 [2024-07-22 18:09:52.395352] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.190 [2024-07-22 18:09:52.395360] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.190 [2024-07-22 18:09:52.395366] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.190 [2024-07-22 18:09:52.397498] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.190 [2024-07-22 18:09:52.406530] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.190 [2024-07-22 18:09:52.406882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.190 [2024-07-22 18:09:52.407040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.190 [2024-07-22 18:09:52.407049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.190 [2024-07-22 18:09:52.407056] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.190 [2024-07-22 18:09:52.407171] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.190 [2024-07-22 18:09:52.407321] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.190 [2024-07-22 18:09:52.407328] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.190 [2024-07-22 18:09:52.407334] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.190 [2024-07-22 18:09:52.409417] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.190 [2024-07-22 18:09:52.418795] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.190 [2024-07-22 18:09:52.419145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.190 [2024-07-22 18:09:52.419206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.190 [2024-07-22 18:09:52.419215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.190 [2024-07-22 18:09:52.419222] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.190 [2024-07-22 18:09:52.419360] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.190 [2024-07-22 18:09:52.419494] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.190 [2024-07-22 18:09:52.419501] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.190 [2024-07-22 18:09:52.419508] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.190 [2024-07-22 18:09:52.421690] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.190 [2024-07-22 18:09:52.431094] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.190 [2024-07-22 18:09:52.431677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.190 [2024-07-22 18:09:52.431910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.190 [2024-07-22 18:09:52.431922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.190 [2024-07-22 18:09:52.431931] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.190 [2024-07-22 18:09:52.432081] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.190 [2024-07-22 18:09:52.432237] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.190 [2024-07-22 18:09:52.432245] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.190 [2024-07-22 18:09:52.432253] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.190 [2024-07-22 18:09:52.434465] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.190 [2024-07-22 18:09:52.443467] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.190 [2024-07-22 18:09:52.444035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.190 [2024-07-22 18:09:52.444370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.190 [2024-07-22 18:09:52.444383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.190 [2024-07-22 18:09:52.444392] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.190 [2024-07-22 18:09:52.444526] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.190 [2024-07-22 18:09:52.444627] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.190 [2024-07-22 18:09:52.444634] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.191 [2024-07-22 18:09:52.444641] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.191 [2024-07-22 18:09:52.446778] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.191 [2024-07-22 18:09:52.455729] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.191 [2024-07-22 18:09:52.456172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.191 [2024-07-22 18:09:52.456521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.191 [2024-07-22 18:09:52.456531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.191 [2024-07-22 18:09:52.456538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.191 [2024-07-22 18:09:52.456672] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.191 [2024-07-22 18:09:52.456753] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.191 [2024-07-22 18:09:52.456759] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.191 [2024-07-22 18:09:52.456766] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.191 [2024-07-22 18:09:52.458951] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.453 [2024-07-22 18:09:52.468082] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.453 [2024-07-22 18:09:52.468385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.453 [2024-07-22 18:09:52.468735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.453 [2024-07-22 18:09:52.468744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.453 [2024-07-22 18:09:52.468751] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.453 [2024-07-22 18:09:52.468883] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.453 [2024-07-22 18:09:52.469014] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.453 [2024-07-22 18:09:52.469025] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.453 [2024-07-22 18:09:52.469032] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.453 [2024-07-22 18:09:52.471060] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.453 [2024-07-22 18:09:52.480335] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.453 [2024-07-22 18:09:52.480756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.453 [2024-07-22 18:09:52.480968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.453 [2024-07-22 18:09:52.480979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.453 [2024-07-22 18:09:52.480988] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.453 [2024-07-22 18:09:52.481138] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.453 [2024-07-22 18:09:52.481308] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.453 [2024-07-22 18:09:52.481315] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.453 [2024-07-22 18:09:52.481322] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.453 [2024-07-22 18:09:52.483468] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.453 [2024-07-22 18:09:52.492675] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.453 [2024-07-22 18:09:52.493116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.453 [2024-07-22 18:09:52.493435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.453 [2024-07-22 18:09:52.493446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.453 [2024-07-22 18:09:52.493453] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.453 [2024-07-22 18:09:52.493603] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.453 [2024-07-22 18:09:52.493769] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.453 [2024-07-22 18:09:52.493776] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.453 [2024-07-22 18:09:52.493782] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.453 [2024-07-22 18:09:52.495676] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.453 [2024-07-22 18:09:52.504933] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.453 [2024-07-22 18:09:52.505389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.453 [2024-07-22 18:09:52.505595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.453 [2024-07-22 18:09:52.505604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.453 [2024-07-22 18:09:52.505611] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.453 [2024-07-22 18:09:52.505799] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.453 [2024-07-22 18:09:52.505948] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.453 [2024-07-22 18:09:52.505955] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.453 [2024-07-22 18:09:52.505965] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.453 [2024-07-22 18:09:52.508033] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.453 [2024-07-22 18:09:52.517128] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.453 [2024-07-22 18:09:52.517700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.453 [2024-07-22 18:09:52.518019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.453 [2024-07-22 18:09:52.518031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.453 [2024-07-22 18:09:52.518040] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.453 [2024-07-22 18:09:52.518208] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.453 [2024-07-22 18:09:52.518343] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.453 [2024-07-22 18:09:52.518357] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.453 [2024-07-22 18:09:52.518364] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.453 [2024-07-22 18:09:52.520380] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.454 [2024-07-22 18:09:52.529389] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.454 [2024-07-22 18:09:52.529810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.454 [2024-07-22 18:09:52.530013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.454 [2024-07-22 18:09:52.530024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.454 [2024-07-22 18:09:52.530033] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.454 [2024-07-22 18:09:52.530184] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.454 [2024-07-22 18:09:52.530319] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.454 [2024-07-22 18:09:52.530327] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.454 [2024-07-22 18:09:52.530334] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.454 [2024-07-22 18:09:52.532375] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.454 [2024-07-22 18:09:52.541775] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.454 [2024-07-22 18:09:52.542196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.454 [2024-07-22 18:09:52.542388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.454 [2024-07-22 18:09:52.542398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.454 [2024-07-22 18:09:52.542405] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.454 [2024-07-22 18:09:52.542538] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.454 [2024-07-22 18:09:52.542688] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.454 [2024-07-22 18:09:52.542696] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.454 [2024-07-22 18:09:52.542703] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.454 [2024-07-22 18:09:52.544704] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.454 [2024-07-22 18:09:52.554092] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.454 [2024-07-22 18:09:52.554724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.454 [2024-07-22 18:09:52.554973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.454 [2024-07-22 18:09:52.554984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.454 [2024-07-22 18:09:52.554993] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.454 [2024-07-22 18:09:52.555109] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.454 [2024-07-22 18:09:52.555211] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.454 [2024-07-22 18:09:52.555219] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.454 [2024-07-22 18:09:52.555227] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.454 [2024-07-22 18:09:52.557453] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.454 [2024-07-22 18:09:52.566640] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.454 [2024-07-22 18:09:52.567178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.454 [2024-07-22 18:09:52.567505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.454 [2024-07-22 18:09:52.567518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.454 [2024-07-22 18:09:52.567527] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.454 [2024-07-22 18:09:52.567660] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.454 [2024-07-22 18:09:52.567812] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.454 [2024-07-22 18:09:52.567820] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.454 [2024-07-22 18:09:52.567827] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.454 [2024-07-22 18:09:52.569677] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.454 [2024-07-22 18:09:52.579025] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.454 [2024-07-22 18:09:52.579447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.454 [2024-07-22 18:09:52.579785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.454 [2024-07-22 18:09:52.579797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.454 [2024-07-22 18:09:52.579806] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.454 [2024-07-22 18:09:52.579974] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.454 [2024-07-22 18:09:52.580109] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.454 [2024-07-22 18:09:52.580116] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.454 [2024-07-22 18:09:52.580124] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.454 [2024-07-22 18:09:52.582211] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.454 [2024-07-22 18:09:52.591155] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.454 [2024-07-22 18:09:52.591773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.454 [2024-07-22 18:09:52.592043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.454 [2024-07-22 18:09:52.592054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.454 [2024-07-22 18:09:52.592063] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.454 [2024-07-22 18:09:52.592232] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.454 [2024-07-22 18:09:52.592374] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.454 [2024-07-22 18:09:52.592384] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.454 [2024-07-22 18:09:52.592390] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.454 [2024-07-22 18:09:52.594562] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.454 [2024-07-22 18:09:52.603578] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.454 [2024-07-22 18:09:52.603975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.454 [2024-07-22 18:09:52.604370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.454 [2024-07-22 18:09:52.604383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.454 [2024-07-22 18:09:52.604392] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.454 [2024-07-22 18:09:52.604543] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.454 [2024-07-22 18:09:52.604678] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.454 [2024-07-22 18:09:52.604685] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.454 [2024-07-22 18:09:52.604692] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.454 [2024-07-22 18:09:52.606844] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.454 [2024-07-22 18:09:52.616202] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.454 [2024-07-22 18:09:52.616781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.454 [2024-07-22 18:09:52.616999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.454 [2024-07-22 18:09:52.617010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.455 [2024-07-22 18:09:52.617019] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.455 [2024-07-22 18:09:52.617203] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.455 [2024-07-22 18:09:52.617321] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.455 [2024-07-22 18:09:52.617329] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.455 [2024-07-22 18:09:52.617336] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.455 [2024-07-22 18:09:52.619472] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.455 [2024-07-22 18:09:52.628516] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.455 [2024-07-22 18:09:52.629003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.455 [2024-07-22 18:09:52.629190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.455 [2024-07-22 18:09:52.629199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.455 [2024-07-22 18:09:52.629206] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.455 [2024-07-22 18:09:52.629378] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.455 [2024-07-22 18:09:52.629512] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.455 [2024-07-22 18:09:52.629520] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.455 [2024-07-22 18:09:52.629526] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.455 [2024-07-22 18:09:52.631639] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.455 [2024-07-22 18:09:52.640962] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.455 [2024-07-22 18:09:52.641377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.455 [2024-07-22 18:09:52.641722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.455 [2024-07-22 18:09:52.641734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.455 [2024-07-22 18:09:52.641742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.455 [2024-07-22 18:09:52.641876] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.455 [2024-07-22 18:09:52.642027] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.455 [2024-07-22 18:09:52.642034] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.455 [2024-07-22 18:09:52.642041] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.455 [2024-07-22 18:09:52.644062] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.455 [2024-07-22 18:09:52.653221] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.455 [2024-07-22 18:09:52.653647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.455 [2024-07-22 18:09:52.653969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.455 [2024-07-22 18:09:52.653980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.455 [2024-07-22 18:09:52.653989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.455 [2024-07-22 18:09:52.654140] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.455 [2024-07-22 18:09:52.654276] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.455 [2024-07-22 18:09:52.654284] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.455 [2024-07-22 18:09:52.654291] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.455 [2024-07-22 18:09:52.656366] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.455 [2024-07-22 18:09:52.665751] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.455 [2024-07-22 18:09:52.666193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.455 [2024-07-22 18:09:52.666526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.455 [2024-07-22 18:09:52.666541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.455 [2024-07-22 18:09:52.666548] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.455 [2024-07-22 18:09:52.666682] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.455 [2024-07-22 18:09:52.666832] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.455 [2024-07-22 18:09:52.666840] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.455 [2024-07-22 18:09:52.666846] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.455 [2024-07-22 18:09:52.668856] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.455 [2024-07-22 18:09:52.678095] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.455 [2024-07-22 18:09:52.678401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.455 [2024-07-22 18:09:52.678689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.455 [2024-07-22 18:09:52.678698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.455 [2024-07-22 18:09:52.678705] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.455 [2024-07-22 18:09:52.678854] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.455 [2024-07-22 18:09:52.679003] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.455 [2024-07-22 18:09:52.679010] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.455 [2024-07-22 18:09:52.679016] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.455 [2024-07-22 18:09:52.681145] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.455 [2024-07-22 18:09:52.690459] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.455 [2024-07-22 18:09:52.690919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.455 [2024-07-22 18:09:52.691119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.455 [2024-07-22 18:09:52.691130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.455 [2024-07-22 18:09:52.691139] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.455 [2024-07-22 18:09:52.691272] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.455 [2024-07-22 18:09:52.691415] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.455 [2024-07-22 18:09:52.691425] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.455 [2024-07-22 18:09:52.691432] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.455 [2024-07-22 18:09:52.693432] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.455 [2024-07-22 18:09:52.702928] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.455 [2024-07-22 18:09:52.703541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.455 [2024-07-22 18:09:52.703862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.455 [2024-07-22 18:09:52.703874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.455 [2024-07-22 18:09:52.703887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.455 [2024-07-22 18:09:52.704020] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.455 [2024-07-22 18:09:52.704155] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.455 [2024-07-22 18:09:52.704162] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.455 [2024-07-22 18:09:52.704169] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.455 [2024-07-22 18:09:52.706156] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.455 [2024-07-22 18:09:52.715220] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.455 [2024-07-22 18:09:52.715687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.455 [2024-07-22 18:09:52.715733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.455 [2024-07-22 18:09:52.715741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.455 [2024-07-22 18:09:52.715748] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.455 [2024-07-22 18:09:52.715881] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.455 [2024-07-22 18:09:52.716013] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.455 [2024-07-22 18:09:52.716021] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.455 [2024-07-22 18:09:52.716027] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.455 [2024-07-22 18:09:52.718075] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.455 [2024-07-22 18:09:52.727449] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.719 [2024-07-22 18:09:52.728057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.719 [2024-07-22 18:09:52.728450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.719 [2024-07-22 18:09:52.728463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.719 [2024-07-22 18:09:52.728472] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.719 [2024-07-22 18:09:52.728607] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.719 [2024-07-22 18:09:52.728759] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.719 [2024-07-22 18:09:52.728767] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.719 [2024-07-22 18:09:52.728773] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.719 [2024-07-22 18:09:52.730911] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.719 [2024-07-22 18:09:52.739546] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.719 [2024-07-22 18:09:52.739989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.719 [2024-07-22 18:09:52.740303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.719 [2024-07-22 18:09:52.740312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.719 [2024-07-22 18:09:52.740319] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.719 [2024-07-22 18:09:52.740495] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.719 [2024-07-22 18:09:52.740662] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.719 [2024-07-22 18:09:52.740669] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.719 [2024-07-22 18:09:52.740675] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.719 [2024-07-22 18:09:52.742905] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.719 [2024-07-22 18:09:52.751913] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.719 [2024-07-22 18:09:52.752483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.719 [2024-07-22 18:09:52.752647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.719 [2024-07-22 18:09:52.752659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.719 [2024-07-22 18:09:52.752668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.719 [2024-07-22 18:09:52.752785] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.719 [2024-07-22 18:09:52.752886] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.719 [2024-07-22 18:09:52.752894] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.719 [2024-07-22 18:09:52.752901] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.719 [2024-07-22 18:09:52.755060] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.719 [2024-07-22 18:09:52.764175] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.719 [2024-07-22 18:09:52.764742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.719 [2024-07-22 18:09:52.765067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.719 [2024-07-22 18:09:52.765079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.719 [2024-07-22 18:09:52.765089] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.719 [2024-07-22 18:09:52.765221] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.719 [2024-07-22 18:09:52.765364] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.719 [2024-07-22 18:09:52.765373] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.719 [2024-07-22 18:09:52.765380] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.719 [2024-07-22 18:09:52.767546] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.719 [2024-07-22 18:09:52.776274] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.719 [2024-07-22 18:09:52.776819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.719 [2024-07-22 18:09:52.777019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.719 [2024-07-22 18:09:52.777033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.719 [2024-07-22 18:09:52.777042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.719 [2024-07-22 18:09:52.777209] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.719 [2024-07-22 18:09:52.777315] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.719 [2024-07-22 18:09:52.777323] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.719 [2024-07-22 18:09:52.777330] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.719 [2024-07-22 18:09:52.779356] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.719 [2024-07-22 18:09:52.788551] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.719 [2024-07-22 18:09:52.789041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.719 [2024-07-22 18:09:52.789380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.719 [2024-07-22 18:09:52.789394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.719 [2024-07-22 18:09:52.789403] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.719 [2024-07-22 18:09:52.789519] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.719 [2024-07-22 18:09:52.789655] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.719 [2024-07-22 18:09:52.789662] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.719 [2024-07-22 18:09:52.789669] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.719 [2024-07-22 18:09:52.791874] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.719 [2024-07-22 18:09:52.801174] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.719 [2024-07-22 18:09:52.801637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.719 [2024-07-22 18:09:52.802032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.719 [2024-07-22 18:09:52.802044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.719 [2024-07-22 18:09:52.802053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.719 [2024-07-22 18:09:52.802186] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.719 [2024-07-22 18:09:52.802321] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.719 [2024-07-22 18:09:52.802329] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.719 [2024-07-22 18:09:52.802336] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.720 [2024-07-22 18:09:52.804289] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.720 [2024-07-22 18:09:52.813382] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.720 [2024-07-22 18:09:52.813913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.720 [2024-07-22 18:09:52.814236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.720 [2024-07-22 18:09:52.814248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.720 [2024-07-22 18:09:52.814257] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.720 [2024-07-22 18:09:52.814380] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.720 [2024-07-22 18:09:52.814516] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.720 [2024-07-22 18:09:52.814528] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.720 [2024-07-22 18:09:52.814535] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.720 [2024-07-22 18:09:52.816637] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.720 [2024-07-22 18:09:52.825827] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.720 [2024-07-22 18:09:52.826283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.720 [2024-07-22 18:09:52.826489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.720 [2024-07-22 18:09:52.826501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.720 [2024-07-22 18:09:52.826509] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.720 [2024-07-22 18:09:52.826591] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.720 [2024-07-22 18:09:52.826723] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.720 [2024-07-22 18:09:52.826730] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.720 [2024-07-22 18:09:52.826737] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.720 [2024-07-22 18:09:52.828992] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.720 [2024-07-22 18:09:52.838161] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.720 [2024-07-22 18:09:52.838670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.720 [2024-07-22 18:09:52.838999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.720 [2024-07-22 18:09:52.839011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.720 [2024-07-22 18:09:52.839020] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.720 [2024-07-22 18:09:52.839188] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.720 [2024-07-22 18:09:52.839306] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.720 [2024-07-22 18:09:52.839314] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.720 [2024-07-22 18:09:52.839321] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.720 [2024-07-22 18:09:52.841290] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.720 [2024-07-22 18:09:52.850469] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.720 [2024-07-22 18:09:52.850957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.720 [2024-07-22 18:09:52.851270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.720 [2024-07-22 18:09:52.851279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.720 [2024-07-22 18:09:52.851287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.720 [2024-07-22 18:09:52.851443] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.720 [2024-07-22 18:09:52.851559] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.720 [2024-07-22 18:09:52.851567] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.720 [2024-07-22 18:09:52.851580] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.720 [2024-07-22 18:09:52.853748] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.720 [2024-07-22 18:09:52.862892] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.720 [2024-07-22 18:09:52.863267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.720 [2024-07-22 18:09:52.863635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.720 [2024-07-22 18:09:52.863648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.720 [2024-07-22 18:09:52.863657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.720 [2024-07-22 18:09:52.863825] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.720 [2024-07-22 18:09:52.863995] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.720 [2024-07-22 18:09:52.864004] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.720 [2024-07-22 18:09:52.864011] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.720 [2024-07-22 18:09:52.866298] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.720 [2024-07-22 18:09:52.875252] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.720 [2024-07-22 18:09:52.875703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.720 [2024-07-22 18:09:52.876008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.720 [2024-07-22 18:09:52.876018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.720 [2024-07-22 18:09:52.876025] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.720 [2024-07-22 18:09:52.876158] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.720 [2024-07-22 18:09:52.876290] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.720 [2024-07-22 18:09:52.876298] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.720 [2024-07-22 18:09:52.876304] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.720 [2024-07-22 18:09:52.878524] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.720 [2024-07-22 18:09:52.887524] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.720 [2024-07-22 18:09:52.887905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.720 [2024-07-22 18:09:52.888225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.720 [2024-07-22 18:09:52.888237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.720 [2024-07-22 18:09:52.888246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.720 [2024-07-22 18:09:52.888438] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.720 [2024-07-22 18:09:52.888574] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.720 [2024-07-22 18:09:52.888581] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.720 [2024-07-22 18:09:52.888588] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.720 [2024-07-22 18:09:52.890777] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.720 [2024-07-22 18:09:52.899662] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.720 [2024-07-22 18:09:52.899959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.720 [2024-07-22 18:09:52.900284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.720 [2024-07-22 18:09:52.900294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.720 [2024-07-22 18:09:52.900301] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.720 [2024-07-22 18:09:52.900440] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.720 [2024-07-22 18:09:52.900573] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.720 [2024-07-22 18:09:52.900580] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.720 [2024-07-22 18:09:52.900587] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.720 [2024-07-22 18:09:52.902700] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.720 [2024-07-22 18:09:52.911990] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.720 [2024-07-22 18:09:52.912538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.720 [2024-07-22 18:09:52.912858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.720 [2024-07-22 18:09:52.912870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.721 [2024-07-22 18:09:52.912879] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.721 [2024-07-22 18:09:52.913047] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.721 [2024-07-22 18:09:52.913183] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.721 [2024-07-22 18:09:52.913191] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.721 [2024-07-22 18:09:52.913198] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.721 [2024-07-22 18:09:52.915180] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.721 [2024-07-22 18:09:52.924193] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.721 [2024-07-22 18:09:52.924673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.721 [2024-07-22 18:09:52.924967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.721 [2024-07-22 18:09:52.924976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.721 [2024-07-22 18:09:52.924983] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.721 [2024-07-22 18:09:52.925115] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.721 [2024-07-22 18:09:52.925247] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.721 [2024-07-22 18:09:52.925255] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.721 [2024-07-22 18:09:52.925262] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.721 [2024-07-22 18:09:52.927398] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.721 [2024-07-22 18:09:52.936576] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.721 [2024-07-22 18:09:52.937150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.721 [2024-07-22 18:09:52.937405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.721 [2024-07-22 18:09:52.937418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.721 [2024-07-22 18:09:52.937427] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.721 [2024-07-22 18:09:52.937562] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.721 [2024-07-22 18:09:52.937697] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.721 [2024-07-22 18:09:52.937706] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.721 [2024-07-22 18:09:52.937712] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.721 [2024-07-22 18:09:52.939834] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.721 [2024-07-22 18:09:52.948864] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.721 [2024-07-22 18:09:52.949259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.721 [2024-07-22 18:09:52.949452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.721 [2024-07-22 18:09:52.949466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.721 [2024-07-22 18:09:52.949474] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.721 [2024-07-22 18:09:52.949642] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.721 [2024-07-22 18:09:52.949744] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.721 [2024-07-22 18:09:52.949752] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.721 [2024-07-22 18:09:52.949759] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.721 [2024-07-22 18:09:52.951679] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.721 [2024-07-22 18:09:52.961290] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.721 [2024-07-22 18:09:52.961850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.721 [2024-07-22 18:09:52.962171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.721 [2024-07-22 18:09:52.962183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.721 [2024-07-22 18:09:52.962192] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.721 [2024-07-22 18:09:52.962343] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.721 [2024-07-22 18:09:52.962509] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.721 [2024-07-22 18:09:52.962517] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.721 [2024-07-22 18:09:52.962524] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.721 [2024-07-22 18:09:52.964493] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.721 [2024-07-22 18:09:52.973696] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.721 [2024-07-22 18:09:52.974244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.721 [2024-07-22 18:09:52.974562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.721 [2024-07-22 18:09:52.974577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.721 [2024-07-22 18:09:52.974586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.721 [2024-07-22 18:09:52.974736] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.721 [2024-07-22 18:09:52.974821] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.721 [2024-07-22 18:09:52.974829] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.721 [2024-07-22 18:09:52.974836] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.721 [2024-07-22 18:09:52.976774] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.721 [2024-07-22 18:09:52.985917] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.721 [2024-07-22 18:09:52.986451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.721 [2024-07-22 18:09:52.986925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.721 [2024-07-22 18:09:52.986937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.721 [2024-07-22 18:09:52.986946] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.721 [2024-07-22 18:09:52.987080] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.721 [2024-07-22 18:09:52.987181] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.721 [2024-07-22 18:09:52.987188] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.721 [2024-07-22 18:09:52.987195] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.721 [2024-07-22 18:09:52.989476] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.984 [2024-07-22 18:09:52.998479] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.984 [2024-07-22 18:09:52.998817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.984 [2024-07-22 18:09:52.998986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.984 [2024-07-22 18:09:52.998998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.984 [2024-07-22 18:09:52.999005] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.984 [2024-07-22 18:09:52.999156] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.984 [2024-07-22 18:09:52.999305] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.984 [2024-07-22 18:09:52.999313] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.984 [2024-07-22 18:09:52.999319] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.984 [2024-07-22 18:09:53.001372] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.984 [2024-07-22 18:09:53.010997] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.984 [2024-07-22 18:09:53.011449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.984 [2024-07-22 18:09:53.011764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.984 [2024-07-22 18:09:53.011777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.984 [2024-07-22 18:09:53.011785] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.984 [2024-07-22 18:09:53.011951] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.984 [2024-07-22 18:09:53.012066] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.984 [2024-07-22 18:09:53.012073] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.984 [2024-07-22 18:09:53.012080] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.984 [2024-07-22 18:09:53.014178] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.984 18:09:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:48.984 18:09:53 -- common/autotest_common.sh@852 -- # return 0 00:32:48.984 18:09:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:48.984 18:09:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:48.984 18:09:53 -- common/autotest_common.sh@10 -- # set +x 00:32:48.984 [2024-07-22 18:09:53.023393] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.984 [2024-07-22 18:09:53.023768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.984 [2024-07-22 18:09:53.024052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.984 [2024-07-22 18:09:53.024061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.984 [2024-07-22 18:09:53.024067] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.984 [2024-07-22 18:09:53.024182] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.984 [2024-07-22 18:09:53.024331] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.984 [2024-07-22 18:09:53.024338] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.984 [2024-07-22 18:09:53.024346] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.984 [2024-07-22 18:09:53.026397] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.984 [2024-07-22 18:09:53.035611] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.984 [2024-07-22 18:09:53.035906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.984 [2024-07-22 18:09:53.036130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.984 [2024-07-22 18:09:53.036139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.984 [2024-07-22 18:09:53.036146] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.984 [2024-07-22 18:09:53.036277] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.984 [2024-07-22 18:09:53.036448] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.984 [2024-07-22 18:09:53.036456] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.984 [2024-07-22 18:09:53.036462] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.984 [2024-07-22 18:09:53.038373] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.984 [2024-07-22 18:09:53.047928] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.984 [2024-07-22 18:09:53.048442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.984 [2024-07-22 18:09:53.048741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.984 [2024-07-22 18:09:53.048749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.984 [2024-07-22 18:09:53.048756] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.984 [2024-07-22 18:09:53.048888] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.984 [2024-07-22 18:09:53.049020] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.984 [2024-07-22 18:09:53.049027] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.984 [2024-07-22 18:09:53.049034] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.984 [2024-07-22 18:09:53.050977] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
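The loop traced above repeats once per reset attempt: the host side's posix_sock_create() gets errno 111 because nothing is accepting on 10.0.0.2:4420 yet, so the TCP qpair, the controller re-initialization, and finally the reset all fail in turn. Errno 111 on Linux is ECONNREFUSED; a quick way to confirm that mapping (an illustrative one-liner, not part of the test scripts) is:

  python3 -c "import errno, os; print(errno.errorcode[111], '-', os.strerror(111))"
  # prints: ECONNREFUSED - Connection refused

Once nvmf_create_transport and nvmf_subsystem_add_listener complete further down in the trace, the same connect path succeeds and the reset loop ends with "Resetting controller successful".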
00:32:48.984 18:09:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:48.984 18:09:53 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:48.984 18:09:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:48.984 18:09:53 -- common/autotest_common.sh@10 -- # set +x 00:32:48.984 [2024-07-22 18:09:53.060312] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.984 [2024-07-22 18:09:53.060889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.984 [2024-07-22 18:09:53.061188] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:48.984 [2024-07-22 18:09:53.061209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.984 [2024-07-22 18:09:53.061220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.984 [2024-07-22 18:09:53.061229] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.984 [2024-07-22 18:09:53.061405] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.984 [2024-07-22 18:09:53.061524] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.984 [2024-07-22 18:09:53.061531] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.984 [2024-07-22 18:09:53.061538] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.984 [2024-07-22 18:09:53.063651] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.984 18:09:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:48.984 18:09:53 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:48.984 18:09:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:48.984 18:09:53 -- common/autotest_common.sh@10 -- # set +x 00:32:48.984 [2024-07-22 18:09:53.072676] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.984 [2024-07-22 18:09:53.073133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.984 [2024-07-22 18:09:53.073469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.984 [2024-07-22 18:09:53.073483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.985 [2024-07-22 18:09:53.073492] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.985 [2024-07-22 18:09:53.073591] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.985 [2024-07-22 18:09:53.073777] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.985 [2024-07-22 18:09:53.073788] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.985 [2024-07-22 18:09:53.073796] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:32:48.985 [2024-07-22 18:09:53.075866] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.985 [2024-07-22 18:09:53.084879] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.985 [2024-07-22 18:09:53.085275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.985 [2024-07-22 18:09:53.085432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.985 [2024-07-22 18:09:53.085445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.985 [2024-07-22 18:09:53.085454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.985 [2024-07-22 18:09:53.085623] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.985 [2024-07-22 18:09:53.085760] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.985 [2024-07-22 18:09:53.085767] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.985 [2024-07-22 18:09:53.085774] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.985 [2024-07-22 18:09:53.087930] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.985 Malloc0 00:32:48.985 18:09:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:48.985 18:09:53 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:48.985 18:09:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:48.985 18:09:53 -- common/autotest_common.sh@10 -- # set +x 00:32:48.985 [2024-07-22 18:09:53.097162] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.985 [2024-07-22 18:09:53.097505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.985 [2024-07-22 18:09:53.097828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.985 [2024-07-22 18:09:53.097838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.985 [2024-07-22 18:09:53.097845] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.985 [2024-07-22 18:09:53.097961] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.985 [2024-07-22 18:09:53.098110] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.985 [2024-07-22 18:09:53.098118] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.985 [2024-07-22 18:09:53.098124] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.985 [2024-07-22 18:09:53.100255] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:48.985 18:09:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:48.985 18:09:53 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:48.985 18:09:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:48.985 18:09:53 -- common/autotest_common.sh@10 -- # set +x 00:32:48.985 [2024-07-22 18:09:53.109407] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.985 [2024-07-22 18:09:53.109850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.985 [2024-07-22 18:09:53.110161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.985 [2024-07-22 18:09:53.110171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.985 [2024-07-22 18:09:53.110186] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.985 [2024-07-22 18:09:53.110285] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.985 [2024-07-22 18:09:53.110355] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.985 [2024-07-22 18:09:53.110363] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.985 [2024-07-22 18:09:53.110369] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:48.985 [2024-07-22 18:09:53.112400] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.985 18:09:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:48.985 18:09:53 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:48.985 18:09:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:48.985 18:09:53 -- common/autotest_common.sh@10 -- # set +x 00:32:48.985 [2024-07-22 18:09:53.121587] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.985 [2024-07-22 18:09:53.121890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.985 [2024-07-22 18:09:53.122182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:48.985 [2024-07-22 18:09:53.122191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11753b0 with addr=10.0.0.2, port=4420 00:32:48.985 [2024-07-22 18:09:53.122197] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11753b0 is same with the state(5) to be set 00:32:48.985 [2024-07-22 18:09:53.122367] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11753b0 (9): Bad file descriptor 00:32:48.985 [2024-07-22 18:09:53.122534] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:32:48.985 [2024-07-22 18:09:53.122542] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:32:48.985 [2024-07-22 18:09:53.122549] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:32:48.985 [2024-07-22 18:09:53.124528] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:48.985 [2024-07-22 18:09:53.125219] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:48.985 18:09:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:48.985 18:09:53 -- host/bdevperf.sh@38 -- # wait 1882796 00:32:48.985 [2024-07-22 18:09:53.133800] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:48.985 [2024-07-22 18:09:53.208740] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:32:58.985 00:32:58.985 Latency(us) 00:32:58.985 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:58.985 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:58.985 Verification LBA range: start 0x0 length 0x4000 00:32:58.985 Nvme1n1 : 15.00 11069.59 43.24 16516.64 0.00 4626.32 831.80 16333.59 00:32:58.985 =================================================================================================================== 00:32:58.985 Total : 11069.59 43.24 16516.64 0.00 4626.32 831.80 16333.59 00:32:58.985 18:10:01 -- host/bdevperf.sh@39 -- # sync 00:32:58.985 18:10:01 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:58.985 18:10:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:58.985 18:10:01 -- common/autotest_common.sh@10 -- # set +x 00:32:58.985 18:10:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:58.986 18:10:01 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:32:58.986 18:10:01 -- host/bdevperf.sh@44 -- # nvmftestfini 00:32:58.986 18:10:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:58.986 18:10:01 -- nvmf/common.sh@116 -- # sync 00:32:58.986 18:10:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:58.986 18:10:01 -- nvmf/common.sh@119 -- # set +e 00:32:58.986 18:10:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:58.986 18:10:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:58.986 rmmod nvme_tcp 00:32:58.986 rmmod nvme_fabrics 00:32:58.986 rmmod nvme_keyring 00:32:58.986 18:10:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:58.986 18:10:01 -- nvmf/common.sh@123 -- # set -e 00:32:58.986 18:10:01 -- nvmf/common.sh@124 -- # return 0 00:32:58.986 18:10:01 -- nvmf/common.sh@477 -- # '[' -n 1883736 ']' 00:32:58.986 18:10:01 -- nvmf/common.sh@478 -- # killprocess 1883736 00:32:58.986 18:10:01 -- common/autotest_common.sh@926 -- # '[' -z 1883736 ']' 00:32:58.986 18:10:01 -- common/autotest_common.sh@930 -- # kill -0 1883736 00:32:58.986 18:10:01 -- common/autotest_common.sh@931 -- # uname 00:32:58.986 18:10:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:58.986 18:10:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1883736 00:32:58.986 18:10:01 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:32:58.986 18:10:01 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:32:58.986 18:10:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1883736' 00:32:58.986 killing process with pid 1883736 00:32:58.986 18:10:01 -- common/autotest_common.sh@945 -- # kill 1883736 00:32:58.986 18:10:01 -- common/autotest_common.sh@950 -- # wait 1883736 00:32:58.986 18:10:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:58.986 18:10:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:58.986 
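The target that bdevperf reconnects to is built entirely from the rpc_cmd calls traced above (host/bdevperf.sh lines 17-21). Outside the test harness, the same sequence can be issued with SPDK's scripts/rpc.py client; this is a minimal sketch assuming a default RPC socket, not the exact wrapper the suite uses:

  # TCP transport with the same options as the traced nvmf_create_transport call
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # 64 MB malloc bdev with 512-byte blocks to back the namespace
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # subsystem, namespace, and the 10.0.0.2:4420 listener the host reconnects to
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

In the Latency(us) summary above, the MiB/s column is consistent with the IOPS column at the 4096-byte I/O size (11069.59 IOPS x 4096 B is roughly 43.24 MiB/s), and the large Fail/s figure reflects I/O completed with errors during the repeated controller resets this test exercises.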
18:10:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:58.986 18:10:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:58.986 18:10:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:58.986 18:10:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:58.986 18:10:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:58.986 18:10:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.926 18:10:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:59.926 00:32:59.926 real 0m28.730s 00:32:59.926 user 1m3.975s 00:32:59.926 sys 0m7.699s 00:32:59.926 18:10:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:59.926 18:10:04 -- common/autotest_common.sh@10 -- # set +x 00:32:59.926 ************************************ 00:32:59.926 END TEST nvmf_bdevperf 00:32:59.926 ************************************ 00:32:59.926 18:10:04 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:32:59.926 18:10:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:32:59.926 18:10:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:59.926 18:10:04 -- common/autotest_common.sh@10 -- # set +x 00:32:59.926 ************************************ 00:32:59.926 START TEST nvmf_target_disconnect 00:32:59.927 ************************************ 00:32:59.927 18:10:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:00.187 * Looking for test storage... 00:33:00.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:00.187 18:10:04 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:00.187 18:10:04 -- nvmf/common.sh@7 -- # uname -s 00:33:00.187 18:10:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:00.187 18:10:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:00.187 18:10:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:00.187 18:10:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:00.187 18:10:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:00.187 18:10:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:00.187 18:10:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:00.187 18:10:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:00.188 18:10:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:00.188 18:10:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:00.188 18:10:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:33:00.188 18:10:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:33:00.188 18:10:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:00.188 18:10:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:00.188 18:10:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:00.188 18:10:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:00.188 18:10:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:00.188 18:10:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:00.188 18:10:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:00.188 
18:10:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.188 18:10:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.188 18:10:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.188 18:10:04 -- paths/export.sh@5 -- # export PATH 00:33:00.188 18:10:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:00.188 18:10:04 -- nvmf/common.sh@46 -- # : 0 00:33:00.188 18:10:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:00.188 18:10:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:00.188 18:10:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:00.188 18:10:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:00.188 18:10:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:00.188 18:10:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:00.188 18:10:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:00.188 18:10:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:00.188 18:10:04 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:00.188 18:10:04 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:00.188 18:10:04 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:33:00.188 18:10:04 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:33:00.188 18:10:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:00.188 18:10:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:00.188 18:10:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:00.188 
18:10:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:00.188 18:10:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:00.188 18:10:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:00.188 18:10:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:00.188 18:10:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:00.188 18:10:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:33:00.188 18:10:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:00.188 18:10:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:00.188 18:10:04 -- common/autotest_common.sh@10 -- # set +x 00:33:08.328 18:10:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:08.328 18:10:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:08.328 18:10:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:08.328 18:10:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:08.328 18:10:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:08.328 18:10:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:08.328 18:10:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:08.328 18:10:12 -- nvmf/common.sh@294 -- # net_devs=() 00:33:08.328 18:10:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:08.328 18:10:12 -- nvmf/common.sh@295 -- # e810=() 00:33:08.328 18:10:12 -- nvmf/common.sh@295 -- # local -ga e810 00:33:08.328 18:10:12 -- nvmf/common.sh@296 -- # x722=() 00:33:08.328 18:10:12 -- nvmf/common.sh@296 -- # local -ga x722 00:33:08.328 18:10:12 -- nvmf/common.sh@297 -- # mlx=() 00:33:08.328 18:10:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:08.328 18:10:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:08.328 18:10:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:08.328 18:10:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:08.328 18:10:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:08.328 18:10:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:08.328 18:10:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:08.328 18:10:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:08.328 18:10:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:08.328 18:10:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:08.328 18:10:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:08.328 18:10:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:08.328 18:10:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:08.328 18:10:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:33:08.328 18:10:12 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:33:08.328 18:10:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:33:08.328 18:10:12 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:33:08.328 18:10:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:08.328 18:10:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:08.328 18:10:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:08.328 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:08.328 18:10:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:08.328 18:10:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:08.328 18:10:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:08.329 18:10:12 -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:33:08.329 18:10:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:08.329 18:10:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:08.329 18:10:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:08.329 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:08.329 18:10:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:08.329 18:10:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:08.329 18:10:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:08.329 18:10:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:08.329 18:10:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:08.329 18:10:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:08.329 18:10:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:33:08.329 18:10:12 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:33:08.329 18:10:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:08.329 18:10:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:08.329 18:10:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:08.329 18:10:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:08.329 18:10:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:08.329 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:08.329 18:10:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:08.329 18:10:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:08.329 18:10:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:08.329 18:10:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:08.329 18:10:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:08.329 18:10:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:08.329 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:08.329 18:10:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:08.329 18:10:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:08.329 18:10:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:08.329 18:10:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:08.329 18:10:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:08.329 18:10:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:08.329 18:10:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:08.329 18:10:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:08.329 18:10:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:08.329 18:10:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:08.329 18:10:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:08.329 18:10:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:08.329 18:10:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:08.329 18:10:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:08.329 18:10:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:08.329 18:10:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:08.329 18:10:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:08.329 18:10:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:08.329 18:10:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:08.329 18:10:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:08.329 18:10:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:33:08.329 18:10:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:08.329 18:10:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:08.329 18:10:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:08.329 18:10:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:08.329 18:10:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:08.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:08.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:33:08.329 00:33:08.329 --- 10.0.0.2 ping statistics --- 00:33:08.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:08.329 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:33:08.329 18:10:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:08.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:08.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:33:08.329 00:33:08.329 --- 10.0.0.1 ping statistics --- 00:33:08.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:08.329 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:33:08.329 18:10:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:08.329 18:10:12 -- nvmf/common.sh@410 -- # return 0 00:33:08.329 18:10:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:08.329 18:10:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:08.329 18:10:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:08.329 18:10:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:08.329 18:10:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:08.329 18:10:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:08.329 18:10:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:08.329 18:10:12 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:33:08.329 18:10:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:08.329 18:10:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:08.329 18:10:12 -- common/autotest_common.sh@10 -- # set +x 00:33:08.329 ************************************ 00:33:08.329 START TEST nvmf_target_disconnect_tc1 00:33:08.329 ************************************ 00:33:08.329 18:10:12 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:33:08.329 18:10:12 -- host/target_disconnect.sh@32 -- # set +e 00:33:08.329 18:10:12 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:08.329 EAL: No free 2048 kB hugepages reported on node 1 00:33:08.591 [2024-07-22 18:10:12.643824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.591 [2024-07-22 18:10:12.644182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:08.591 [2024-07-22 18:10:12.644200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16ed330 with addr=10.0.0.2, port=4420 00:33:08.591 [2024-07-22 18:10:12.644242] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:08.591 [2024-07-22 18:10:12.644258] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:08.591 [2024-07-22 18:10:12.644266] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context 
failed 00:33:08.591 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:33:08.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:33:08.591 Initializing NVMe Controllers 00:33:08.591 18:10:12 -- host/target_disconnect.sh@33 -- # trap - ERR 00:33:08.591 18:10:12 -- host/target_disconnect.sh@33 -- # print_backtrace 00:33:08.591 18:10:12 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:33:08.591 18:10:12 -- common/autotest_common.sh@1132 -- # return 0 00:33:08.591 18:10:12 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:33:08.591 18:10:12 -- host/target_disconnect.sh@41 -- # set -e 00:33:08.591 00:33:08.591 real 0m0.130s 00:33:08.591 user 0m0.051s 00:33:08.591 sys 0m0.077s 00:33:08.591 18:10:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:08.591 18:10:12 -- common/autotest_common.sh@10 -- # set +x 00:33:08.591 ************************************ 00:33:08.591 END TEST nvmf_target_disconnect_tc1 00:33:08.591 ************************************ 00:33:08.591 18:10:12 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:33:08.591 18:10:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:08.591 18:10:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:08.591 18:10:12 -- common/autotest_common.sh@10 -- # set +x 00:33:08.591 ************************************ 00:33:08.591 START TEST nvmf_target_disconnect_tc2 00:33:08.591 ************************************ 00:33:08.591 18:10:12 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:33:08.591 18:10:12 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:33:08.591 18:10:12 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:08.591 18:10:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:08.591 18:10:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:08.591 18:10:12 -- common/autotest_common.sh@10 -- # set +x 00:33:08.591 18:10:12 -- nvmf/common.sh@469 -- # nvmfpid=1889824 00:33:08.591 18:10:12 -- nvmf/common.sh@470 -- # waitforlisten 1889824 00:33:08.591 18:10:12 -- common/autotest_common.sh@819 -- # '[' -z 1889824 ']' 00:33:08.591 18:10:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:08.591 18:10:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:08.591 18:10:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:08.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:08.591 18:10:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:08.591 18:10:12 -- common/autotest_common.sh@10 -- # set +x 00:33:08.591 18:10:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:08.591 [2024-07-22 18:10:12.761021] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:33:08.591 [2024-07-22 18:10:12.761086] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:08.591 EAL: No free 2048 kB hugepages reported on node 1 00:33:08.854 [2024-07-22 18:10:12.900745] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:08.854 [2024-07-22 18:10:13.068397] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:08.854 [2024-07-22 18:10:13.068732] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:08.854 [2024-07-22 18:10:13.068756] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:08.855 [2024-07-22 18:10:13.068777] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:08.855 [2024-07-22 18:10:13.068986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:33:08.855 [2024-07-22 18:10:13.069141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:33:08.855 [2024-07-22 18:10:13.069294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:33:08.855 [2024-07-22 18:10:13.069300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:33:09.430 18:10:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:09.430 18:10:13 -- common/autotest_common.sh@852 -- # return 0 00:33:09.430 18:10:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:09.430 18:10:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:09.430 18:10:13 -- common/autotest_common.sh@10 -- # set +x 00:33:09.430 18:10:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:09.430 18:10:13 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:09.430 18:10:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:09.430 18:10:13 -- common/autotest_common.sh@10 -- # set +x 00:33:09.430 Malloc0 00:33:09.430 18:10:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:09.430 18:10:13 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:09.430 18:10:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:09.430 18:10:13 -- common/autotest_common.sh@10 -- # set +x 00:33:09.430 [2024-07-22 18:10:13.676016] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:09.430 18:10:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:09.430 18:10:13 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:09.430 18:10:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:09.430 18:10:13 -- common/autotest_common.sh@10 -- # set +x 00:33:09.430 18:10:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:09.430 18:10:13 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:09.430 18:10:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:09.430 18:10:13 -- common/autotest_common.sh@10 -- # set +x 00:33:09.430 18:10:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:09.430 18:10:13 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:09.430 18:10:13 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:33:09.430 18:10:13 -- common/autotest_common.sh@10 -- # set +x 00:33:09.430 [2024-07-22 18:10:13.705007] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:09.691 18:10:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:09.691 18:10:13 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:09.691 18:10:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:09.691 18:10:13 -- common/autotest_common.sh@10 -- # set +x 00:33:09.691 18:10:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:09.691 18:10:13 -- host/target_disconnect.sh@50 -- # reconnectpid=1889878 00:33:09.691 18:10:13 -- host/target_disconnect.sh@52 -- # sleep 2 00:33:09.691 18:10:13 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:09.691 EAL: No free 2048 kB hugepages reported on node 1 00:33:11.618 18:10:15 -- host/target_disconnect.sh@53 -- # kill -9 1889824 00:33:11.618 18:10:15 -- host/target_disconnect.sh@55 -- # sleep 2 00:33:11.618 Read completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 00:33:11.618 Read completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 00:33:11.618 Read completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 00:33:11.618 Read completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 00:33:11.618 Read completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 00:33:11.618 Read completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 00:33:11.618 Read completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 00:33:11.618 Read completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 00:33:11.618 Read completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 00:33:11.618 Read completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 00:33:11.618 Read completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 00:33:11.618 Read completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 00:33:11.618 Read completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 00:33:11.618 Read completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 00:33:11.618 Read completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 00:33:11.618 Write completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 00:33:11.618 Write completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 00:33:11.618 Read completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 00:33:11.618 Read completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 00:33:11.618 Write completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 00:33:11.618 Read completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 00:33:11.618 Write completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 00:33:11.618 Write completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 00:33:11.618 Read completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 00:33:11.618 Read completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 00:33:11.618 Read completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 00:33:11.618 Read completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 
00:33:11.618 Read completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 00:33:11.618 Read completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 00:33:11.618 Write completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 00:33:11.618 Read completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 00:33:11.618 Read completed with error (sct=0, sc=8) 00:33:11.618 starting I/O failed 00:33:11.618 [2024-07-22 18:10:15.740059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:11.618 [2024-07-22 18:10:15.740609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.618 [2024-07-22 18:10:15.740951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.618 [2024-07-22 18:10:15.740964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.618 qpair failed and we were unable to recover it. 00:33:11.618 [2024-07-22 18:10:15.741301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.618 [2024-07-22 18:10:15.741661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.618 [2024-07-22 18:10:15.741704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.618 qpair failed and we were unable to recover it. 00:33:11.618 [2024-07-22 18:10:15.742013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.618 [2024-07-22 18:10:15.742225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.618 [2024-07-22 18:10:15.742235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.618 qpair failed and we were unable to recover it. 00:33:11.618 [2024-07-22 18:10:15.742568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.618 [2024-07-22 18:10:15.742866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.618 [2024-07-22 18:10:15.742880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.618 qpair failed and we were unable to recover it. 00:33:11.618 [2024-07-22 18:10:15.743157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.618 [2024-07-22 18:10:15.743515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.618 [2024-07-22 18:10:15.743524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.618 qpair failed and we were unable to recover it. 00:33:11.618 [2024-07-22 18:10:15.743829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.618 [2024-07-22 18:10:15.744128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.618 [2024-07-22 18:10:15.744137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.618 qpair failed and we were unable to recover it. 
00:33:11.618 [2024-07-22 18:10:15.744461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.618 [2024-07-22 18:10:15.744784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.618 [2024-07-22 18:10:15.744794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.618 qpair failed and we were unable to recover it. 00:33:11.619 [2024-07-22 18:10:15.744961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.619 [2024-07-22 18:10:15.745153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.619 [2024-07-22 18:10:15.745163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.619 qpair failed and we were unable to recover it. 00:33:11.619 [2024-07-22 18:10:15.745504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.619 [2024-07-22 18:10:15.745719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.619 [2024-07-22 18:10:15.745728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.619 qpair failed and we were unable to recover it. 00:33:11.619 [2024-07-22 18:10:15.746063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.619 [2024-07-22 18:10:15.746222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.619 [2024-07-22 18:10:15.746233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.619 qpair failed and we were unable to recover it. 00:33:11.619 [2024-07-22 18:10:15.747057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.619 [2024-07-22 18:10:15.747382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.619 [2024-07-22 18:10:15.747392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.619 qpair failed and we were unable to recover it. 00:33:11.619 [2024-07-22 18:10:15.747474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.619 [2024-07-22 18:10:15.747789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.619 [2024-07-22 18:10:15.747798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.619 qpair failed and we were unable to recover it. 00:33:11.619 [2024-07-22 18:10:15.748042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.619 [2024-07-22 18:10:15.748312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.619 [2024-07-22 18:10:15.748323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.619 qpair failed and we were unable to recover it. 
00:33:11.619 [... identical connect() failed, errno = 111 (posix.c:1032:posix_sock_create) and sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 (nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock) entries repeat continuously from 18:10:15.748 to 18:10:15.838, each followed by "qpair failed and we were unable to recover it." ...]
00:33:11.624 [2024-07-22 18:10:15.838367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.838738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.838765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.624 qpair failed and we were unable to recover it. 00:33:11.624 [2024-07-22 18:10:15.839120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.839368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.839395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.624 qpair failed and we were unable to recover it. 00:33:11.624 [2024-07-22 18:10:15.839716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.840039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.840066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.624 qpair failed and we were unable to recover it. 00:33:11.624 [2024-07-22 18:10:15.840193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.840448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.840476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.624 qpair failed and we were unable to recover it. 00:33:11.624 [2024-07-22 18:10:15.840821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.841137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.841162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.624 qpair failed and we were unable to recover it. 00:33:11.624 [2024-07-22 18:10:15.841415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.841569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.841596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.624 qpair failed and we were unable to recover it. 00:33:11.624 [2024-07-22 18:10:15.841893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.842220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.842247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.624 qpair failed and we were unable to recover it. 
00:33:11.624 [2024-07-22 18:10:15.842376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.842638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.842664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.624 qpair failed and we were unable to recover it. 00:33:11.624 [2024-07-22 18:10:15.842903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.843159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.843187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.624 qpair failed and we were unable to recover it. 00:33:11.624 [2024-07-22 18:10:15.843558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.843898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.843924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.624 qpair failed and we were unable to recover it. 00:33:11.624 [2024-07-22 18:10:15.844267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.844585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.844612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.624 qpair failed and we were unable to recover it. 00:33:11.624 [2024-07-22 18:10:15.844960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.845311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.845337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.624 qpair failed and we were unable to recover it. 00:33:11.624 [2024-07-22 18:10:15.845663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.845851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.845877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.624 qpair failed and we were unable to recover it. 00:33:11.624 [2024-07-22 18:10:15.846225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.846617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.846644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.624 qpair failed and we were unable to recover it. 
00:33:11.624 [2024-07-22 18:10:15.846874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.847235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.847261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.624 qpair failed and we were unable to recover it. 00:33:11.624 [2024-07-22 18:10:15.847502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.847910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.847936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.624 qpair failed and we were unable to recover it. 00:33:11.624 [2024-07-22 18:10:15.848241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.848641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.848669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.624 qpair failed and we were unable to recover it. 00:33:11.624 [2024-07-22 18:10:15.848805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.849212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.849240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.624 qpair failed and we were unable to recover it. 00:33:11.624 [2024-07-22 18:10:15.849587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.624 [2024-07-22 18:10:15.849899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.849926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.625 qpair failed and we were unable to recover it. 00:33:11.625 [2024-07-22 18:10:15.850213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.850612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.850639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.625 qpair failed and we were unable to recover it. 00:33:11.625 [2024-07-22 18:10:15.850960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.851229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.851257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.625 qpair failed and we were unable to recover it. 
00:33:11.625 [2024-07-22 18:10:15.851668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.851993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.852021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.625 qpair failed and we were unable to recover it. 00:33:11.625 [2024-07-22 18:10:15.852257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.852374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.852401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.625 qpair failed and we were unable to recover it. 00:33:11.625 [2024-07-22 18:10:15.852742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.853082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.853109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.625 qpair failed and we were unable to recover it. 00:33:11.625 [2024-07-22 18:10:15.853460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.853830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.853856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.625 qpair failed and we were unable to recover it. 00:33:11.625 [2024-07-22 18:10:15.854186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.854505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.854532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.625 qpair failed and we were unable to recover it. 00:33:11.625 [2024-07-22 18:10:15.854810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.855134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.855161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.625 qpair failed and we were unable to recover it. 00:33:11.625 [2024-07-22 18:10:15.855383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.855831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.855857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.625 qpair failed and we were unable to recover it. 
00:33:11.625 [2024-07-22 18:10:15.856207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.856448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.856475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.625 qpair failed and we were unable to recover it. 00:33:11.625 [2024-07-22 18:10:15.856837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.857161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.857187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.625 qpair failed and we were unable to recover it. 00:33:11.625 [2024-07-22 18:10:15.857452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.857698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.857724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.625 qpair failed and we were unable to recover it. 00:33:11.625 [2024-07-22 18:10:15.858081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.858430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.858457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.625 qpair failed and we were unable to recover it. 00:33:11.625 [2024-07-22 18:10:15.858700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.858903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.858929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.625 qpair failed and we were unable to recover it. 00:33:11.625 [2024-07-22 18:10:15.859251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.859486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.859514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.625 qpair failed and we were unable to recover it. 00:33:11.625 [2024-07-22 18:10:15.859853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.860153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.860181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.625 qpair failed and we were unable to recover it. 
00:33:11.625 [2024-07-22 18:10:15.860589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.860817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.860843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.625 qpair failed and we were unable to recover it. 00:33:11.625 [2024-07-22 18:10:15.861217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.861625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.861653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.625 qpair failed and we were unable to recover it. 00:33:11.625 [2024-07-22 18:10:15.862061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.862291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.862325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.625 qpair failed and we were unable to recover it. 00:33:11.625 [2024-07-22 18:10:15.862609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.862936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.862963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.625 qpair failed and we were unable to recover it. 00:33:11.625 [2024-07-22 18:10:15.863287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.863560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.863594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.625 qpair failed and we were unable to recover it. 00:33:11.625 [2024-07-22 18:10:15.863861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.864076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.864103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.625 qpair failed and we were unable to recover it. 00:33:11.625 [2024-07-22 18:10:15.864504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.864851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.864878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.625 qpair failed and we were unable to recover it. 
00:33:11.625 [2024-07-22 18:10:15.865229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.865569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.865596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.625 qpair failed and we were unable to recover it. 00:33:11.625 [2024-07-22 18:10:15.865893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.866242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.866268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.625 qpair failed and we were unable to recover it. 00:33:11.625 [2024-07-22 18:10:15.866650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.867026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.867053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.625 qpair failed and we were unable to recover it. 00:33:11.625 [2024-07-22 18:10:15.867426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.867778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.867804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.625 qpair failed and we were unable to recover it. 00:33:11.625 [2024-07-22 18:10:15.868151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.868478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.625 [2024-07-22 18:10:15.868505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.625 qpair failed and we were unable to recover it. 00:33:11.625 [2024-07-22 18:10:15.868857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.869220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.869247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.626 qpair failed and we were unable to recover it. 00:33:11.626 [2024-07-22 18:10:15.869534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.869759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.869791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.626 qpair failed and we were unable to recover it. 
00:33:11.626 [2024-07-22 18:10:15.870204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.870481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.870509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.626 qpair failed and we were unable to recover it. 00:33:11.626 [2024-07-22 18:10:15.870737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.871075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.871101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.626 qpair failed and we were unable to recover it. 00:33:11.626 [2024-07-22 18:10:15.871242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.871393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.871420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.626 qpair failed and we were unable to recover it. 00:33:11.626 [2024-07-22 18:10:15.871795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.872110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.872137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.626 qpair failed and we were unable to recover it. 00:33:11.626 [2024-07-22 18:10:15.872393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.872752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.872778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.626 qpair failed and we were unable to recover it. 00:33:11.626 [2024-07-22 18:10:15.873125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.873440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.873466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.626 qpair failed and we were unable to recover it. 00:33:11.626 [2024-07-22 18:10:15.873808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.874128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.874154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.626 qpair failed and we were unable to recover it. 
00:33:11.626 [2024-07-22 18:10:15.874496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.874859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.874885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.626 qpair failed and we were unable to recover it. 00:33:11.626 [2024-07-22 18:10:15.875120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.875411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.875439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.626 qpair failed and we were unable to recover it. 00:33:11.626 [2024-07-22 18:10:15.875720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.876074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.876107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.626 qpair failed and we were unable to recover it. 00:33:11.626 [2024-07-22 18:10:15.876434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.876795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.876821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.626 qpair failed and we were unable to recover it. 00:33:11.626 [2024-07-22 18:10:15.877052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.877337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.877372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.626 qpair failed and we were unable to recover it. 00:33:11.626 [2024-07-22 18:10:15.877791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.878128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.878154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.626 qpair failed and we were unable to recover it. 00:33:11.626 [2024-07-22 18:10:15.878408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.878748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.878774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.626 qpair failed and we were unable to recover it. 
00:33:11.626 [2024-07-22 18:10:15.879092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.879409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.879437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.626 qpair failed and we were unable to recover it. 00:33:11.626 [2024-07-22 18:10:15.879686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.880057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.880084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.626 qpair failed and we were unable to recover it. 00:33:11.626 [2024-07-22 18:10:15.880408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.880754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.880780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.626 qpair failed and we were unable to recover it. 00:33:11.626 [2024-07-22 18:10:15.881125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.881344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.881377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.626 qpair failed and we were unable to recover it. 00:33:11.626 [2024-07-22 18:10:15.881702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.881990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.882016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.626 qpair failed and we were unable to recover it. 00:33:11.626 [2024-07-22 18:10:15.882322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.882559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.882586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.626 qpair failed and we were unable to recover it. 00:33:11.626 [2024-07-22 18:10:15.882979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.883305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.883332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.626 qpair failed and we were unable to recover it. 
00:33:11.626 [2024-07-22 18:10:15.883594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.883902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.883929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.626 qpair failed and we were unable to recover it. 00:33:11.626 [2024-07-22 18:10:15.884153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.884454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.884483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.626 qpair failed and we were unable to recover it. 00:33:11.626 [2024-07-22 18:10:15.884829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.885149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.885175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.626 qpair failed and we were unable to recover it. 00:33:11.626 [2024-07-22 18:10:15.885495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.885834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.885860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.626 qpair failed and we were unable to recover it. 00:33:11.626 [2024-07-22 18:10:15.886013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.886397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.886424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.626 qpair failed and we were unable to recover it. 00:33:11.626 [2024-07-22 18:10:15.886809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.626 [2024-07-22 18:10:15.887143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.627 [2024-07-22 18:10:15.887170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.627 qpair failed and we were unable to recover it. 00:33:11.627 [2024-07-22 18:10:15.887394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.627 [2024-07-22 18:10:15.887652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.627 [2024-07-22 18:10:15.887678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.627 qpair failed and we were unable to recover it. 
00:33:11.627 [2024-07-22 18:10:15.888050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.627 [2024-07-22 18:10:15.888375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.627 [2024-07-22 18:10:15.888404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.627 qpair failed and we were unable to recover it. 00:33:11.627 [2024-07-22 18:10:15.888679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.627 [2024-07-22 18:10:15.889033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.627 [2024-07-22 18:10:15.889060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.627 qpair failed and we were unable to recover it. 00:33:11.627 [2024-07-22 18:10:15.889303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.895 [2024-07-22 18:10:15.889650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.895 [2024-07-22 18:10:15.889679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.895 qpair failed and we were unable to recover it. 00:33:11.895 [2024-07-22 18:10:15.889998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.895 [2024-07-22 18:10:15.890378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.895 [2024-07-22 18:10:15.890406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.895 qpair failed and we were unable to recover it. 00:33:11.895 [2024-07-22 18:10:15.890657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.890960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.890987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.896 qpair failed and we were unable to recover it. 00:33:11.896 [2024-07-22 18:10:15.891302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.891627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.891654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.896 qpair failed and we were unable to recover it. 00:33:11.896 [2024-07-22 18:10:15.891968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.892213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.892240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.896 qpair failed and we were unable to recover it. 
00:33:11.896 [2024-07-22 18:10:15.892491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.892861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.892887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.896 qpair failed and we were unable to recover it. 00:33:11.896 [2024-07-22 18:10:15.893228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.893488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.893515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.896 qpair failed and we were unable to recover it. 00:33:11.896 [2024-07-22 18:10:15.893773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.894114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.894140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.896 qpair failed and we were unable to recover it. 00:33:11.896 [2024-07-22 18:10:15.894362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.894815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.894841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.896 qpair failed and we were unable to recover it. 00:33:11.896 [2024-07-22 18:10:15.895093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.895438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.895466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.896 qpair failed and we were unable to recover it. 00:33:11.896 [2024-07-22 18:10:15.895808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.896161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.896188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.896 qpair failed and we were unable to recover it. 00:33:11.896 [2024-07-22 18:10:15.896509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.896835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.896862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.896 qpair failed and we were unable to recover it. 
00:33:11.896 [2024-07-22 18:10:15.897117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.897445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.897473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.896 qpair failed and we were unable to recover it. 00:33:11.896 [2024-07-22 18:10:15.897821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.898152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.898179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.896 qpair failed and we were unable to recover it. 00:33:11.896 [2024-07-22 18:10:15.898529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.898869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.898897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.896 qpair failed and we were unable to recover it. 00:33:11.896 [2024-07-22 18:10:15.899239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.899580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.899608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.896 qpair failed and we were unable to recover it. 00:33:11.896 [2024-07-22 18:10:15.899906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.900232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.900259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.896 qpair failed and we were unable to recover it. 00:33:11.896 [2024-07-22 18:10:15.900649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.900905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.900931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.896 qpair failed and we were unable to recover it. 00:33:11.896 [2024-07-22 18:10:15.901295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.901642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.901670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.896 qpair failed and we were unable to recover it. 
00:33:11.896 [2024-07-22 18:10:15.902016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.902362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.902390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.896 qpair failed and we were unable to recover it. 00:33:11.896 [2024-07-22 18:10:15.902764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.903076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.903103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.896 qpair failed and we were unable to recover it. 00:33:11.896 [2024-07-22 18:10:15.903471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.903814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.903842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.896 qpair failed and we were unable to recover it. 00:33:11.896 [2024-07-22 18:10:15.904265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.904418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.904447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.896 qpair failed and we were unable to recover it. 00:33:11.896 [2024-07-22 18:10:15.904764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.904988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.905014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.896 qpair failed and we were unable to recover it. 00:33:11.896 [2024-07-22 18:10:15.905267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.905587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.905615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.896 qpair failed and we were unable to recover it. 00:33:11.896 [2024-07-22 18:10:15.905951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.906265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.906293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.896 qpair failed and we were unable to recover it. 
00:33:11.896 [2024-07-22 18:10:15.906734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.907105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.907134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.896 qpair failed and we were unable to recover it. 00:33:11.896 [2024-07-22 18:10:15.907381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.907697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.907722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.896 qpair failed and we were unable to recover it. 00:33:11.896 [2024-07-22 18:10:15.908009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.908314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.908341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.896 qpair failed and we were unable to recover it. 00:33:11.896 [2024-07-22 18:10:15.908700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.909021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.909048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.896 qpair failed and we were unable to recover it. 00:33:11.896 [2024-07-22 18:10:15.909370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.896 [2024-07-22 18:10:15.909516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.897 [2024-07-22 18:10:15.909549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.897 qpair failed and we were unable to recover it. 00:33:11.897 [2024-07-22 18:10:15.909909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.897 [2024-07-22 18:10:15.910245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.897 [2024-07-22 18:10:15.910272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.897 qpair failed and we were unable to recover it. 00:33:11.897 [2024-07-22 18:10:15.910601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.897 [2024-07-22 18:10:15.910931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.897 [2024-07-22 18:10:15.910958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.897 qpair failed and we were unable to recover it. 
00:33:11.902 [2024-07-22 18:10:16.010972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.902 [2024-07-22 18:10:16.011330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.902 [2024-07-22 18:10:16.011364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.902 qpair failed and we were unable to recover it. 00:33:11.902 [2024-07-22 18:10:16.011735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.902 [2024-07-22 18:10:16.012048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.902 [2024-07-22 18:10:16.012076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.902 qpair failed and we were unable to recover it. 00:33:11.902 [2024-07-22 18:10:16.012437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.902 [2024-07-22 18:10:16.012783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.902 [2024-07-22 18:10:16.012810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.902 qpair failed and we were unable to recover it. 00:33:11.902 [2024-07-22 18:10:16.013152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.902 [2024-07-22 18:10:16.013505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.902 [2024-07-22 18:10:16.013533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.902 qpair failed and we were unable to recover it. 00:33:11.902 [2024-07-22 18:10:16.013915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.902 [2024-07-22 18:10:16.014304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.902 [2024-07-22 18:10:16.014331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.902 qpair failed and we were unable to recover it. 00:33:11.902 [2024-07-22 18:10:16.014578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.902 [2024-07-22 18:10:16.014950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.902 [2024-07-22 18:10:16.014976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.902 qpair failed and we were unable to recover it. 00:33:11.902 [2024-07-22 18:10:16.015217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.902 [2024-07-22 18:10:16.015447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.902 [2024-07-22 18:10:16.015475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.902 qpair failed and we were unable to recover it. 
00:33:11.902 [2024-07-22 18:10:16.015846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.902 [2024-07-22 18:10:16.016184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.902 [2024-07-22 18:10:16.016211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.902 qpair failed and we were unable to recover it. 00:33:11.902 [2024-07-22 18:10:16.016540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.902 [2024-07-22 18:10:16.016889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.902 [2024-07-22 18:10:16.016917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.902 qpair failed and we were unable to recover it. 00:33:11.902 [2024-07-22 18:10:16.017332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.902 [2024-07-22 18:10:16.017715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.902 [2024-07-22 18:10:16.017744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.902 qpair failed and we were unable to recover it. 00:33:11.902 [2024-07-22 18:10:16.017976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.902 [2024-07-22 18:10:16.018210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.902 [2024-07-22 18:10:16.018238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.902 qpair failed and we were unable to recover it. 00:33:11.902 [2024-07-22 18:10:16.018473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.902 [2024-07-22 18:10:16.018772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.902 [2024-07-22 18:10:16.018800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.903 qpair failed and we were unable to recover it. 00:33:11.903 [2024-07-22 18:10:16.019042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.019343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.019379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.903 qpair failed and we were unable to recover it. 00:33:11.903 [2024-07-22 18:10:16.019727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.020032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.020065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.903 qpair failed and we were unable to recover it. 
00:33:11.903 [2024-07-22 18:10:16.020413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.020747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.020774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.903 qpair failed and we were unable to recover it. 00:33:11.903 [2024-07-22 18:10:16.021121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.021435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.021463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.903 qpair failed and we were unable to recover it. 00:33:11.903 [2024-07-22 18:10:16.021816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.022157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.022184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.903 qpair failed and we were unable to recover it. 00:33:11.903 [2024-07-22 18:10:16.022531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.022764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.022791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.903 qpair failed and we were unable to recover it. 00:33:11.903 [2024-07-22 18:10:16.023173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.023522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.023551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.903 qpair failed and we were unable to recover it. 00:33:11.903 [2024-07-22 18:10:16.023770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.024098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.024124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.903 qpair failed and we were unable to recover it. 00:33:11.903 [2024-07-22 18:10:16.024370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.024690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.024717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.903 qpair failed and we were unable to recover it. 
00:33:11.903 [2024-07-22 18:10:16.025048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.025422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.025450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.903 qpair failed and we were unable to recover it. 00:33:11.903 [2024-07-22 18:10:16.025827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.026036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.026066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.903 qpair failed and we were unable to recover it. 00:33:11.903 [2024-07-22 18:10:16.026496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.026813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.026840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.903 qpair failed and we were unable to recover it. 00:33:11.903 [2024-07-22 18:10:16.027175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.027497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.027525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.903 qpair failed and we were unable to recover it. 00:33:11.903 [2024-07-22 18:10:16.027779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.028122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.028148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.903 qpair failed and we were unable to recover it. 00:33:11.903 [2024-07-22 18:10:16.028454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.028805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.028831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.903 qpair failed and we were unable to recover it. 00:33:11.903 [2024-07-22 18:10:16.029162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.029394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.029425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.903 qpair failed and we were unable to recover it. 
00:33:11.903 [2024-07-22 18:10:16.029778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.030121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.030148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.903 qpair failed and we were unable to recover it. 00:33:11.903 [2024-07-22 18:10:16.030468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.030834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.030860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.903 qpair failed and we were unable to recover it. 00:33:11.903 [2024-07-22 18:10:16.031211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.031577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.031606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.903 qpair failed and we were unable to recover it. 00:33:11.903 [2024-07-22 18:10:16.031928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.032270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.032297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.903 qpair failed and we were unable to recover it. 00:33:11.903 [2024-07-22 18:10:16.032703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.033076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.033103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.903 qpair failed and we were unable to recover it. 00:33:11.903 [2024-07-22 18:10:16.033449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.033813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.033841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.903 qpair failed and we were unable to recover it. 00:33:11.903 [2024-07-22 18:10:16.034199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.034578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.034606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.903 qpair failed and we were unable to recover it. 
00:33:11.903 [2024-07-22 18:10:16.034948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.035222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.035249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.903 qpair failed and we were unable to recover it. 00:33:11.903 [2024-07-22 18:10:16.035548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.035808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.035835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.903 qpair failed and we were unable to recover it. 00:33:11.903 [2024-07-22 18:10:16.036193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.036533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.036561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.903 qpair failed and we were unable to recover it. 00:33:11.903 [2024-07-22 18:10:16.036944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.037155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.037181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.903 qpair failed and we were unable to recover it. 00:33:11.903 [2024-07-22 18:10:16.037503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.037873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.037900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.903 qpair failed and we were unable to recover it. 00:33:11.903 [2024-07-22 18:10:16.038248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.903 [2024-07-22 18:10:16.038575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.038604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.904 qpair failed and we were unable to recover it. 00:33:11.904 [2024-07-22 18:10:16.038955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.039303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.039330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.904 qpair failed and we were unable to recover it. 
00:33:11.904 [2024-07-22 18:10:16.039608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.039832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.039859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.904 qpair failed and we were unable to recover it. 00:33:11.904 [2024-07-22 18:10:16.040229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.040567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.040595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.904 qpair failed and we were unable to recover it. 00:33:11.904 [2024-07-22 18:10:16.040838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.041169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.041196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.904 qpair failed and we were unable to recover it. 00:33:11.904 [2024-07-22 18:10:16.041444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.041802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.041829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.904 qpair failed and we were unable to recover it. 00:33:11.904 [2024-07-22 18:10:16.042150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.042517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.042545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.904 qpair failed and we were unable to recover it. 00:33:11.904 [2024-07-22 18:10:16.042892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.043236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.043262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.904 qpair failed and we were unable to recover it. 00:33:11.904 [2024-07-22 18:10:16.043592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.043939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.043966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.904 qpair failed and we were unable to recover it. 
00:33:11.904 [2024-07-22 18:10:16.044308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.044659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.044687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.904 qpair failed and we were unable to recover it. 00:33:11.904 [2024-07-22 18:10:16.045027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.045254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.045281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.904 qpair failed and we were unable to recover it. 00:33:11.904 [2024-07-22 18:10:16.045620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.045937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.045964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.904 qpair failed and we were unable to recover it. 00:33:11.904 [2024-07-22 18:10:16.046345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.046675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.046702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.904 qpair failed and we were unable to recover it. 00:33:11.904 [2024-07-22 18:10:16.047039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.047267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.047297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.904 qpair failed and we were unable to recover it. 00:33:11.904 [2024-07-22 18:10:16.047658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.048009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.048037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.904 qpair failed and we were unable to recover it. 00:33:11.904 [2024-07-22 18:10:16.048399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.048759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.048786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.904 qpair failed and we were unable to recover it. 
00:33:11.904 [2024-07-22 18:10:16.049121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.049327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.049499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.904 qpair failed and we were unable to recover it. 00:33:11.904 [2024-07-22 18:10:16.049861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.050181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.050207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.904 qpair failed and we were unable to recover it. 00:33:11.904 [2024-07-22 18:10:16.050553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.050759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.050785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.904 qpair failed and we were unable to recover it. 00:33:11.904 [2024-07-22 18:10:16.051148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.051488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.051515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.904 qpair failed and we were unable to recover it. 00:33:11.904 [2024-07-22 18:10:16.051854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.052204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.052232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.904 qpair failed and we were unable to recover it. 00:33:11.904 [2024-07-22 18:10:16.052557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.052912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.052939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.904 qpair failed and we were unable to recover it. 00:33:11.904 [2024-07-22 18:10:16.053243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.053616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.053644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.904 qpair failed and we were unable to recover it. 
00:33:11.904 [2024-07-22 18:10:16.053990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.054330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.054366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.904 qpair failed and we were unable to recover it. 00:33:11.904 [2024-07-22 18:10:16.054700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.055004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.055038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.904 qpair failed and we were unable to recover it. 00:33:11.904 [2024-07-22 18:10:16.055387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.055725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.055751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.904 qpair failed and we were unable to recover it. 00:33:11.904 [2024-07-22 18:10:16.056114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.056459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.056488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.904 qpair failed and we were unable to recover it. 00:33:11.904 [2024-07-22 18:10:16.056836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.057181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.057208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.904 qpair failed and we were unable to recover it. 00:33:11.904 [2024-07-22 18:10:16.057568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.057915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.057942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.904 qpair failed and we were unable to recover it. 00:33:11.904 [2024-07-22 18:10:16.058281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.904 [2024-07-22 18:10:16.058629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.905 [2024-07-22 18:10:16.058656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.905 qpair failed and we were unable to recover it. 
00:33:11.905 [2024-07-22 18:10:16.059014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.905 [2024-07-22 18:10:16.059326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.905 [2024-07-22 18:10:16.059360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.905 qpair failed and we were unable to recover it. 00:33:11.905 [2024-07-22 18:10:16.059681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.905 [2024-07-22 18:10:16.060029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.905 [2024-07-22 18:10:16.060056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.905 qpair failed and we were unable to recover it. 00:33:11.905 [2024-07-22 18:10:16.060416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.905 [2024-07-22 18:10:16.060757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.905 [2024-07-22 18:10:16.060783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.905 qpair failed and we were unable to recover it. 00:33:11.905 [2024-07-22 18:10:16.061132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.905 [2024-07-22 18:10:16.061480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.905 [2024-07-22 18:10:16.061508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.905 qpair failed and we were unable to recover it. 00:33:11.905 [2024-07-22 18:10:16.061742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.905 [2024-07-22 18:10:16.062079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.905 [2024-07-22 18:10:16.062105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.905 qpair failed and we were unable to recover it. 00:33:11.905 [2024-07-22 18:10:16.062442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.905 [2024-07-22 18:10:16.062812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.905 [2024-07-22 18:10:16.062838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.905 qpair failed and we were unable to recover it. 00:33:11.905 [2024-07-22 18:10:16.063077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.905 [2024-07-22 18:10:16.063457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.905 [2024-07-22 18:10:16.063486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.905 qpair failed and we were unable to recover it. 
00:33:11.905 [2024-07-22 18:10:16.063697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.064010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.064037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.906 qpair failed and we were unable to recover it. 00:33:11.906 [2024-07-22 18:10:16.064392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.064743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.064770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.906 qpair failed and we were unable to recover it. 00:33:11.906 [2024-07-22 18:10:16.065132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.065443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.065471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.906 qpair failed and we were unable to recover it. 00:33:11.906 [2024-07-22 18:10:16.065839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.066207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.066234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.906 qpair failed and we were unable to recover it. 00:33:11.906 [2024-07-22 18:10:16.066569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.066934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.066961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.906 qpair failed and we were unable to recover it. 00:33:11.906 [2024-07-22 18:10:16.067314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.067666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.067694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.906 qpair failed and we were unable to recover it. 00:33:11.906 [2024-07-22 18:10:16.068062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.068421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.068449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.906 qpair failed and we were unable to recover it. 
00:33:11.906 [2024-07-22 18:10:16.068816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.069131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.069158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.906 qpair failed and we were unable to recover it. 00:33:11.906 [2024-07-22 18:10:16.069526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.069870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.069897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.906 qpair failed and we were unable to recover it. 00:33:11.906 [2024-07-22 18:10:16.070252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.070582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.070610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.906 qpair failed and we were unable to recover it. 00:33:11.906 [2024-07-22 18:10:16.070945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.071285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.071312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.906 qpair failed and we were unable to recover it. 00:33:11.906 [2024-07-22 18:10:16.071641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.071986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.072013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.906 qpair failed and we were unable to recover it. 00:33:11.906 [2024-07-22 18:10:16.072370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.072701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.072728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.906 qpair failed and we were unable to recover it. 00:33:11.906 [2024-07-22 18:10:16.073088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.073370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.073398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.906 qpair failed and we were unable to recover it. 
00:33:11.906 [2024-07-22 18:10:16.073770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.074120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.074146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.906 qpair failed and we were unable to recover it. 00:33:11.906 [2024-07-22 18:10:16.074475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.074721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.074748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.906 qpair failed and we were unable to recover it. 00:33:11.906 [2024-07-22 18:10:16.075096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.075403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.075432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.906 qpair failed and we were unable to recover it. 00:33:11.906 [2024-07-22 18:10:16.075803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.076111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.076138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.906 qpair failed and we were unable to recover it. 00:33:11.906 [2024-07-22 18:10:16.076467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.076697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.076728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.906 qpair failed and we were unable to recover it. 00:33:11.906 [2024-07-22 18:10:16.077082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.077437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.077465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.906 qpair failed and we were unable to recover it. 00:33:11.906 [2024-07-22 18:10:16.077836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.078174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.078201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.906 qpair failed and we were unable to recover it. 
00:33:11.906 [2024-07-22 18:10:16.078540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.078886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.078913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.906 qpair failed and we were unable to recover it. 00:33:11.906 [2024-07-22 18:10:16.079245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.079572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.079600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.906 qpair failed and we were unable to recover it. 00:33:11.906 [2024-07-22 18:10:16.079929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.080282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.080309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.906 qpair failed and we were unable to recover it. 00:33:11.906 [2024-07-22 18:10:16.080646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.080958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.080984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.906 qpair failed and we were unable to recover it. 00:33:11.906 [2024-07-22 18:10:16.081365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.081693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.081721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.906 qpair failed and we were unable to recover it. 00:33:11.906 [2024-07-22 18:10:16.082061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.082389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.906 [2024-07-22 18:10:16.082418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.906 qpair failed and we were unable to recover it. 00:33:11.906 [2024-07-22 18:10:16.082810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.083062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.083090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.907 qpair failed and we were unable to recover it. 
00:33:11.907 [2024-07-22 18:10:16.083433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.083772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.083800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.907 qpair failed and we were unable to recover it. 00:33:11.907 [2024-07-22 18:10:16.084147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.084503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.084530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.907 qpair failed and we were unable to recover it. 00:33:11.907 [2024-07-22 18:10:16.084860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.085210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.085236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.907 qpair failed and we were unable to recover it. 00:33:11.907 [2024-07-22 18:10:16.085572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.085922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.085949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.907 qpair failed and we were unable to recover it. 00:33:11.907 [2024-07-22 18:10:16.086296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.086542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.086571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.907 qpair failed and we were unable to recover it. 00:33:11.907 [2024-07-22 18:10:16.086897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.087244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.087270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.907 qpair failed and we were unable to recover it. 00:33:11.907 [2024-07-22 18:10:16.087629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.087971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.087998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.907 qpair failed and we were unable to recover it. 
00:33:11.907 [2024-07-22 18:10:16.088334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.088751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.088779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.907 qpair failed and we were unable to recover it. 00:33:11.907 [2024-07-22 18:10:16.089126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.089466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.089493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.907 qpair failed and we were unable to recover it. 00:33:11.907 [2024-07-22 18:10:16.089882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.090114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.090141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.907 qpair failed and we were unable to recover it. 00:33:11.907 [2024-07-22 18:10:16.090580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.090833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.090865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.907 qpair failed and we were unable to recover it. 00:33:11.907 [2024-07-22 18:10:16.091238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.091569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.091596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.907 qpair failed and we were unable to recover it. 00:33:11.907 [2024-07-22 18:10:16.092006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.092314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.092341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.907 qpair failed and we were unable to recover it. 00:33:11.907 [2024-07-22 18:10:16.092718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.093073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.093100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.907 qpair failed and we were unable to recover it. 
00:33:11.907 [2024-07-22 18:10:16.093437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.093780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.093807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.907 qpair failed and we were unable to recover it. 00:33:11.907 [2024-07-22 18:10:16.094172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.094440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.094467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.907 qpair failed and we were unable to recover it. 00:33:11.907 [2024-07-22 18:10:16.094820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.095161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.095188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.907 qpair failed and we were unable to recover it. 00:33:11.907 [2024-07-22 18:10:16.095483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.095843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.095870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.907 qpair failed and we were unable to recover it. 00:33:11.907 [2024-07-22 18:10:16.096199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.096551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.096579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.907 qpair failed and we were unable to recover it. 00:33:11.907 [2024-07-22 18:10:16.096904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.097251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.097278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.907 qpair failed and we were unable to recover it. 00:33:11.907 [2024-07-22 18:10:16.097645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.097977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.098003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.907 qpair failed and we were unable to recover it. 
00:33:11.907 [2024-07-22 18:10:16.098228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.098498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.098526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.907 qpair failed and we were unable to recover it. 00:33:11.907 [2024-07-22 18:10:16.098855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.099210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.099238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.907 qpair failed and we were unable to recover it. 00:33:11.907 [2024-07-22 18:10:16.099569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.099887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.099914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.907 qpair failed and we were unable to recover it. 00:33:11.907 [2024-07-22 18:10:16.100238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.100563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.100590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.907 qpair failed and we were unable to recover it. 00:33:11.907 [2024-07-22 18:10:16.100939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.101280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.101307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.907 qpair failed and we were unable to recover it. 00:33:11.907 [2024-07-22 18:10:16.101690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.102033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.102059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.907 qpair failed and we were unable to recover it. 00:33:11.907 [2024-07-22 18:10:16.102388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.102769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.907 [2024-07-22 18:10:16.102796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.907 qpair failed and we were unable to recover it. 
00:33:11.907 [2024-07-22 18:10:16.103127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.103478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.103505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.908 qpair failed and we were unable to recover it. 00:33:11.908 [2024-07-22 18:10:16.103856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.104203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.104230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.908 qpair failed and we were unable to recover it. 00:33:11.908 [2024-07-22 18:10:16.104609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.104933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.104960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.908 qpair failed and we were unable to recover it. 00:33:11.908 [2024-07-22 18:10:16.105286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.105647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.105674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.908 qpair failed and we were unable to recover it. 00:33:11.908 [2024-07-22 18:10:16.106031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.106381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.106411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.908 qpair failed and we were unable to recover it. 00:33:11.908 [2024-07-22 18:10:16.106761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.107082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.107108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.908 qpair failed and we were unable to recover it. 00:33:11.908 [2024-07-22 18:10:16.107327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.107662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.107691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.908 qpair failed and we were unable to recover it. 
00:33:11.908 [2024-07-22 18:10:16.108032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.108381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.108409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.908 qpair failed and we were unable to recover it. 00:33:11.908 [2024-07-22 18:10:16.108718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.109087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.109113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.908 qpair failed and we were unable to recover it. 00:33:11.908 [2024-07-22 18:10:16.109467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.109820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.109847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.908 qpair failed and we were unable to recover it. 00:33:11.908 [2024-07-22 18:10:16.110152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.110513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.110541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.908 qpair failed and we were unable to recover it. 00:33:11.908 [2024-07-22 18:10:16.110873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.111190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.111216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.908 qpair failed and we were unable to recover it. 00:33:11.908 [2024-07-22 18:10:16.111558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.111778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.111805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.908 qpair failed and we were unable to recover it. 00:33:11.908 [2024-07-22 18:10:16.112178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.112534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.112561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.908 qpair failed and we were unable to recover it. 
00:33:11.908 [2024-07-22 18:10:16.112916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.113261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.113288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.908 qpair failed and we were unable to recover it. 00:33:11.908 [2024-07-22 18:10:16.113620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.113995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.114022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.908 qpair failed and we were unable to recover it. 00:33:11.908 [2024-07-22 18:10:16.114386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.114727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.114755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.908 qpair failed and we were unable to recover it. 00:33:11.908 [2024-07-22 18:10:16.115139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.115492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.115520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.908 qpair failed and we were unable to recover it. 00:33:11.908 [2024-07-22 18:10:16.115791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.116129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.116156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.908 qpair failed and we were unable to recover it. 00:33:11.908 [2024-07-22 18:10:16.116517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.116874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.116901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.908 qpair failed and we were unable to recover it. 00:33:11.908 [2024-07-22 18:10:16.117223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.117578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.117606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.908 qpair failed and we were unable to recover it. 
00:33:11.908 [2024-07-22 18:10:16.117953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.118303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.118330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.908 qpair failed and we were unable to recover it. 00:33:11.908 [2024-07-22 18:10:16.118727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.119077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.119105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.908 qpair failed and we were unable to recover it. 00:33:11.908 [2024-07-22 18:10:16.119467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.119866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.119893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.908 qpair failed and we were unable to recover it. 00:33:11.908 [2024-07-22 18:10:16.120228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.120582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.120609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.908 qpair failed and we were unable to recover it. 00:33:11.908 [2024-07-22 18:10:16.120955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.121329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.121363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.908 qpair failed and we were unable to recover it. 00:33:11.908 [2024-07-22 18:10:16.121752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.122099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.122127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.908 qpair failed and we were unable to recover it. 00:33:11.908 [2024-07-22 18:10:16.122481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.122883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.122910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.908 qpair failed and we were unable to recover it. 
00:33:11.908 [2024-07-22 18:10:16.123235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.123570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.908 [2024-07-22 18:10:16.123598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.908 qpair failed and we were unable to recover it. 00:33:11.909 [2024-07-22 18:10:16.123954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.124296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.124322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.909 qpair failed and we were unable to recover it. 00:33:11.909 [2024-07-22 18:10:16.124737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.125053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.125080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.909 qpair failed and we were unable to recover it. 00:33:11.909 [2024-07-22 18:10:16.125441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.125814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.125841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.909 qpair failed and we were unable to recover it. 00:33:11.909 [2024-07-22 18:10:16.126190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.126515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.126543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.909 qpair failed and we were unable to recover it. 00:33:11.909 [2024-07-22 18:10:16.126892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.127242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.127274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.909 qpair failed and we were unable to recover it. 00:33:11.909 [2024-07-22 18:10:16.127613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.127961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.127988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.909 qpair failed and we were unable to recover it. 
00:33:11.909 [2024-07-22 18:10:16.128338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.128720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.128749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.909 qpair failed and we were unable to recover it. 00:33:11.909 [2024-07-22 18:10:16.129082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.129422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.129451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.909 qpair failed and we were unable to recover it. 00:33:11.909 [2024-07-22 18:10:16.129772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.130118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.130145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.909 qpair failed and we were unable to recover it. 00:33:11.909 [2024-07-22 18:10:16.130549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.130863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.130889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.909 qpair failed and we were unable to recover it. 00:33:11.909 [2024-07-22 18:10:16.131233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.131533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.131562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.909 qpair failed and we were unable to recover it. 00:33:11.909 [2024-07-22 18:10:16.131906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.132234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.132261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.909 qpair failed and we were unable to recover it. 00:33:11.909 [2024-07-22 18:10:16.132616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.132988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.133015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.909 qpair failed and we were unable to recover it. 
00:33:11.909 [2024-07-22 18:10:16.133252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.133620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.133648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.909 qpair failed and we were unable to recover it. 00:33:11.909 [2024-07-22 18:10:16.133977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.134097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.134133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.909 qpair failed and we were unable to recover it. 00:33:11.909 [2024-07-22 18:10:16.134517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.134837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.134865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.909 qpair failed and we were unable to recover it. 00:33:11.909 [2024-07-22 18:10:16.135222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.135533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.135561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.909 qpair failed and we were unable to recover it. 00:33:11.909 [2024-07-22 18:10:16.135939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.136263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.136290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.909 qpair failed and we were unable to recover it. 00:33:11.909 [2024-07-22 18:10:16.136555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.136924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.136952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.909 qpair failed and we were unable to recover it. 00:33:11.909 [2024-07-22 18:10:16.137303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.137648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.137676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.909 qpair failed and we were unable to recover it. 
00:33:11.909 [2024-07-22 18:10:16.138017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.138390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.138420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.909 qpair failed and we were unable to recover it. 00:33:11.909 [2024-07-22 18:10:16.138744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.139101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.139129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.909 qpair failed and we were unable to recover it. 00:33:11.909 [2024-07-22 18:10:16.139364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.139742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.139770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.909 qpair failed and we were unable to recover it. 00:33:11.909 [2024-07-22 18:10:16.140120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.140466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.140494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.909 qpair failed and we were unable to recover it. 00:33:11.909 [2024-07-22 18:10:16.140841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.141203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.141229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.909 qpair failed and we were unable to recover it. 00:33:11.909 [2024-07-22 18:10:16.141561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.141831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.141858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.909 qpair failed and we were unable to recover it. 00:33:11.909 [2024-07-22 18:10:16.142209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.142578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.142606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.909 qpair failed and we were unable to recover it. 
00:33:11.909 [2024-07-22 18:10:16.142958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.143275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.143301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.909 qpair failed and we were unable to recover it. 00:33:11.909 [2024-07-22 18:10:16.143619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.909 [2024-07-22 18:10:16.143935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.143962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.910 qpair failed and we were unable to recover it. 00:33:11.910 [2024-07-22 18:10:16.144317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.144668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.144696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.910 qpair failed and we were unable to recover it. 00:33:11.910 [2024-07-22 18:10:16.145028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.145378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.145406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.910 qpair failed and we were unable to recover it. 00:33:11.910 [2024-07-22 18:10:16.145728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.146077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.146104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.910 qpair failed and we were unable to recover it. 00:33:11.910 [2024-07-22 18:10:16.146501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.146879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.146906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.910 qpair failed and we were unable to recover it. 00:33:11.910 [2024-07-22 18:10:16.147254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.147491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.147519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.910 qpair failed and we were unable to recover it. 
00:33:11.910 [2024-07-22 18:10:16.147887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.148238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.148265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.910 qpair failed and we were unable to recover it. 00:33:11.910 [2024-07-22 18:10:16.148630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.148922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.148950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.910 qpair failed and we were unable to recover it. 00:33:11.910 [2024-07-22 18:10:16.149345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.149581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.149608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.910 qpair failed and we were unable to recover it. 00:33:11.910 [2024-07-22 18:10:16.150002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.150369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.150396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.910 qpair failed and we were unable to recover it. 00:33:11.910 [2024-07-22 18:10:16.150781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.151086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.151114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.910 qpair failed and we were unable to recover it. 00:33:11.910 [2024-07-22 18:10:16.151369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.151707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.151733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.910 qpair failed and we were unable to recover it. 00:33:11.910 [2024-07-22 18:10:16.152118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.152479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.152507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.910 qpair failed and we were unable to recover it. 
00:33:11.910 [2024-07-22 18:10:16.152866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.153210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.153237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.910 qpair failed and we were unable to recover it. 00:33:11.910 [2024-07-22 18:10:16.153581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.153936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.153962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.910 qpair failed and we were unable to recover it. 00:33:11.910 [2024-07-22 18:10:16.154317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.154554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.154585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.910 qpair failed and we were unable to recover it. 00:33:11.910 [2024-07-22 18:10:16.154981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.155324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.155359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.910 qpair failed and we were unable to recover it. 00:33:11.910 [2024-07-22 18:10:16.155688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.156067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.156093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.910 qpair failed and we were unable to recover it. 00:33:11.910 [2024-07-22 18:10:16.156329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.156693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.156720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.910 qpair failed and we were unable to recover it. 00:33:11.910 [2024-07-22 18:10:16.157081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.157439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.157467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.910 qpair failed and we were unable to recover it. 
00:33:11.910 [2024-07-22 18:10:16.157805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.158186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.158214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.910 qpair failed and we were unable to recover it. 00:33:11.910 [2024-07-22 18:10:16.158468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.158823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.158849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.910 qpair failed and we were unable to recover it. 00:33:11.910 [2024-07-22 18:10:16.159144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.159511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.159539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.910 qpair failed and we were unable to recover it. 00:33:11.910 [2024-07-22 18:10:16.159887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.160121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.160147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.910 qpair failed and we were unable to recover it. 00:33:11.910 [2024-07-22 18:10:16.160531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.160760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.910 [2024-07-22 18:10:16.160787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.910 qpair failed and we were unable to recover it. 00:33:11.910 [2024-07-22 18:10:16.161175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.911 [2024-07-22 18:10:16.161517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.911 [2024-07-22 18:10:16.161546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.911 qpair failed and we were unable to recover it. 00:33:11.911 [2024-07-22 18:10:16.161899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.911 [2024-07-22 18:10:16.162248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.911 [2024-07-22 18:10:16.162275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.911 qpair failed and we were unable to recover it. 
00:33:11.911 [2024-07-22 18:10:16.162611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.911 [2024-07-22 18:10:16.162976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:11.911 [2024-07-22 18:10:16.163003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:11.911 qpair failed and we were unable to recover it. 00:33:11.911 [2024-07-22 18:10:16.163370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.185 [2024-07-22 18:10:16.163769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.185 [2024-07-22 18:10:16.163796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.185 qpair failed and we were unable to recover it. 00:33:12.185 [2024-07-22 18:10:16.164129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.185 [2024-07-22 18:10:16.164442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.185 [2024-07-22 18:10:16.164470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.185 qpair failed and we were unable to recover it. 00:33:12.185 [2024-07-22 18:10:16.164829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.185 [2024-07-22 18:10:16.165084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.185 [2024-07-22 18:10:16.165115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.185 qpair failed and we were unable to recover it. 00:33:12.185 [2024-07-22 18:10:16.165441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.185 [2024-07-22 18:10:16.165688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.185 [2024-07-22 18:10:16.165715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.185 qpair failed and we were unable to recover it. 00:33:12.185 [2024-07-22 18:10:16.165952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.185 [2024-07-22 18:10:16.166179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.185 [2024-07-22 18:10:16.166208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.185 qpair failed and we were unable to recover it. 00:33:12.185 [2024-07-22 18:10:16.166497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.185 [2024-07-22 18:10:16.166855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.185 [2024-07-22 18:10:16.166883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.185 qpair failed and we were unable to recover it. 
00:33:12.185 [2024-07-22 18:10:16.167216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.185 [2024-07-22 18:10:16.167587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.185 [2024-07-22 18:10:16.167617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.185 qpair failed and we were unable to recover it. 00:33:12.185 [2024-07-22 18:10:16.167944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.185 [2024-07-22 18:10:16.168274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.185 [2024-07-22 18:10:16.168300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.185 qpair failed and we were unable to recover it. 00:33:12.185 [2024-07-22 18:10:16.168665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.185 [2024-07-22 18:10:16.169009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.185 [2024-07-22 18:10:16.169036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.185 qpair failed and we were unable to recover it. 00:33:12.185 [2024-07-22 18:10:16.169389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.185 [2024-07-22 18:10:16.169769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.185 [2024-07-22 18:10:16.169801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.185 qpair failed and we were unable to recover it. 00:33:12.185 [2024-07-22 18:10:16.170150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.185 [2024-07-22 18:10:16.170502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.185 [2024-07-22 18:10:16.170529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.185 qpair failed and we were unable to recover it. 00:33:12.185 [2024-07-22 18:10:16.170881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.185 [2024-07-22 18:10:16.171234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.185 [2024-07-22 18:10:16.171261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.185 qpair failed and we were unable to recover it. 00:33:12.185 [2024-07-22 18:10:16.171627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.185 [2024-07-22 18:10:16.171856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.185 [2024-07-22 18:10:16.171883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.185 qpair failed and we were unable to recover it. 
00:33:12.185 [2024-07-22 18:10:16.172272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.185 [2024-07-22 18:10:16.172498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.185 [2024-07-22 18:10:16.172526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.185 qpair failed and we were unable to recover it. 00:33:12.185 [2024-07-22 18:10:16.172790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.185 [2024-07-22 18:10:16.173172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.185 [2024-07-22 18:10:16.173200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.185 qpair failed and we were unable to recover it. 00:33:12.186 [2024-07-22 18:10:16.173533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.173878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.173905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.186 qpair failed and we were unable to recover it. 00:33:12.186 [2024-07-22 18:10:16.174297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.174619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.174647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.186 qpair failed and we were unable to recover it. 00:33:12.186 [2024-07-22 18:10:16.175020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.175372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.175399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.186 qpair failed and we were unable to recover it. 00:33:12.186 [2024-07-22 18:10:16.175624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.175987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.176014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.186 qpair failed and we were unable to recover it. 00:33:12.186 [2024-07-22 18:10:16.176381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.176711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.176738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.186 qpair failed and we were unable to recover it. 
00:33:12.186 [2024-07-22 18:10:16.177094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.177449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.177478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.186 qpair failed and we were unable to recover it. 00:33:12.186 [2024-07-22 18:10:16.177830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.178064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.178095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.186 qpair failed and we were unable to recover it. 00:33:12.186 [2024-07-22 18:10:16.178465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.178836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.178863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.186 qpair failed and we were unable to recover it. 00:33:12.186 [2024-07-22 18:10:16.179221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.179624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.179651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.186 qpair failed and we were unable to recover it. 00:33:12.186 [2024-07-22 18:10:16.180003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.180363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.180391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.186 qpair failed and we were unable to recover it. 00:33:12.186 [2024-07-22 18:10:16.180785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.181093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.181120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.186 qpair failed and we were unable to recover it. 00:33:12.186 [2024-07-22 18:10:16.181473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.181822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.181848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.186 qpair failed and we were unable to recover it. 
00:33:12.186 [2024-07-22 18:10:16.182275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.182586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.182613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.186 qpair failed and we were unable to recover it. 00:33:12.186 [2024-07-22 18:10:16.183008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.183362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.183390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.186 qpair failed and we were unable to recover it. 00:33:12.186 [2024-07-22 18:10:16.183792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.184027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.184054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.186 qpair failed and we were unable to recover it. 00:33:12.186 [2024-07-22 18:10:16.184480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.184810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.184837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.186 qpair failed and we were unable to recover it. 00:33:12.186 [2024-07-22 18:10:16.185180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.185532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.185561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.186 qpair failed and we were unable to recover it. 00:33:12.186 [2024-07-22 18:10:16.185918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.186161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.186188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.186 qpair failed and we were unable to recover it. 00:33:12.186 [2024-07-22 18:10:16.186542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.186888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.186916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.186 qpair failed and we were unable to recover it. 
00:33:12.186 [2024-07-22 18:10:16.187125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.187468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.187497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.186 qpair failed and we were unable to recover it. 00:33:12.186 [2024-07-22 18:10:16.187876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.188159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.188186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.186 qpair failed and we were unable to recover it. 00:33:12.186 [2024-07-22 18:10:16.188572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.188935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.188962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.186 qpair failed and we were unable to recover it. 00:33:12.186 [2024-07-22 18:10:16.189296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.189648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.189678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.186 qpair failed and we were unable to recover it. 00:33:12.186 [2024-07-22 18:10:16.190030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.190360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.190388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.186 qpair failed and we were unable to recover it. 00:33:12.186 [2024-07-22 18:10:16.190800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.191154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.191182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.186 qpair failed and we were unable to recover it. 00:33:12.186 [2024-07-22 18:10:16.191531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.191882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.191908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.186 qpair failed and we were unable to recover it. 
00:33:12.186 [2024-07-22 18:10:16.192298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.192616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.192644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.186 qpair failed and we were unable to recover it. 00:33:12.186 [2024-07-22 18:10:16.193004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.193419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.186 [2024-07-22 18:10:16.193447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.186 qpair failed and we were unable to recover it. 00:33:12.186 [2024-07-22 18:10:16.193665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.193893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.193924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.187 qpair failed and we were unable to recover it. 00:33:12.187 [2024-07-22 18:10:16.194289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.194627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.194654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.187 qpair failed and we were unable to recover it. 00:33:12.187 [2024-07-22 18:10:16.195054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.195433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.195460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.187 qpair failed and we were unable to recover it. 00:33:12.187 [2024-07-22 18:10:16.195816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.196138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.196164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.187 qpair failed and we were unable to recover it. 00:33:12.187 [2024-07-22 18:10:16.196496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.196870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.196897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.187 qpair failed and we were unable to recover it. 
00:33:12.187 [2024-07-22 18:10:16.197289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.197608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.197636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.187 qpair failed and we were unable to recover it. 00:33:12.187 [2024-07-22 18:10:16.197975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.198324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.198358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.187 qpair failed and we were unable to recover it. 00:33:12.187 [2024-07-22 18:10:16.198699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.199022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.199049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.187 qpair failed and we were unable to recover it. 00:33:12.187 [2024-07-22 18:10:16.199417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.199803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.199829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.187 qpair failed and we were unable to recover it. 00:33:12.187 [2024-07-22 18:10:16.200200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.200476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.200504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.187 qpair failed and we were unable to recover it. 00:33:12.187 [2024-07-22 18:10:16.200866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.201171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.201198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.187 qpair failed and we were unable to recover it. 00:33:12.187 [2024-07-22 18:10:16.201541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.201891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.201918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.187 qpair failed and we were unable to recover it. 
00:33:12.187 [2024-07-22 18:10:16.202273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.202628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.202656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.187 qpair failed and we were unable to recover it. 00:33:12.187 [2024-07-22 18:10:16.203001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.203286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.203313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.187 qpair failed and we were unable to recover it. 00:33:12.187 [2024-07-22 18:10:16.203645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.203993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.204020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.187 qpair failed and we were unable to recover it. 00:33:12.187 [2024-07-22 18:10:16.204356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.204705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.204732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.187 qpair failed and we were unable to recover it. 00:33:12.187 [2024-07-22 18:10:16.205085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.205431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.205460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.187 qpair failed and we were unable to recover it. 00:33:12.187 [2024-07-22 18:10:16.205717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.206096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.206128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.187 qpair failed and we were unable to recover it. 00:33:12.187 [2024-07-22 18:10:16.206475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.206831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.206857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.187 qpair failed and we were unable to recover it. 
00:33:12.187 [2024-07-22 18:10:16.207192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.207539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.207567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.187 qpair failed and we were unable to recover it. 00:33:12.187 [2024-07-22 18:10:16.207918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.208269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.208295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.187 qpair failed and we were unable to recover it. 00:33:12.187 [2024-07-22 18:10:16.208655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.208994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.209022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.187 qpair failed and we were unable to recover it. 00:33:12.187 [2024-07-22 18:10:16.209294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.209642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.209671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.187 qpair failed and we were unable to recover it. 00:33:12.187 [2024-07-22 18:10:16.209947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.210292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.210319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.187 qpair failed and we were unable to recover it. 00:33:12.187 [2024-07-22 18:10:16.210680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.210916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.210943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.187 qpair failed and we were unable to recover it. 00:33:12.187 [2024-07-22 18:10:16.211271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.211639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.211666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.187 qpair failed and we were unable to recover it. 
00:33:12.187 [2024-07-22 18:10:16.211914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.212274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.212301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.187 qpair failed and we were unable to recover it. 00:33:12.187 [2024-07-22 18:10:16.212635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.212987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.213014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.187 qpair failed and we were unable to recover it. 00:33:12.187 [2024-07-22 18:10:16.213371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.213722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.187 [2024-07-22 18:10:16.213749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.187 qpair failed and we were unable to recover it. 00:33:12.188 [2024-07-22 18:10:16.214108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.214317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.214344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.188 qpair failed and we were unable to recover it. 00:33:12.188 [2024-07-22 18:10:16.214791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.215138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.215165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.188 qpair failed and we were unable to recover it. 00:33:12.188 [2024-07-22 18:10:16.215389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.215713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.215739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.188 qpair failed and we were unable to recover it. 00:33:12.188 [2024-07-22 18:10:16.215973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.216196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.216225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.188 qpair failed and we were unable to recover it. 
00:33:12.188 [2024-07-22 18:10:16.216493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.217093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.217128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.188 qpair failed and we were unable to recover it. 00:33:12.188 [2024-07-22 18:10:16.217476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.217889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.217916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.188 qpair failed and we were unable to recover it. 00:33:12.188 [2024-07-22 18:10:16.218244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.218616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.218644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.188 qpair failed and we were unable to recover it. 00:33:12.188 [2024-07-22 18:10:16.219027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.219398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.219427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.188 qpair failed and we were unable to recover it. 00:33:12.188 [2024-07-22 18:10:16.219786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.220139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.220166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.188 qpair failed and we were unable to recover it. 00:33:12.188 [2024-07-22 18:10:16.220551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.220933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.220961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.188 qpair failed and we were unable to recover it. 00:33:12.188 [2024-07-22 18:10:16.221311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.221668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.221696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.188 qpair failed and we were unable to recover it. 
00:33:12.188 [2024-07-22 18:10:16.222054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.222402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.222429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.188 qpair failed and we were unable to recover it. 00:33:12.188 [2024-07-22 18:10:16.222798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.223140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.223168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.188 qpair failed and we were unable to recover it. 00:33:12.188 [2024-07-22 18:10:16.223533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.223901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.223928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.188 qpair failed and we were unable to recover it. 00:33:12.188 [2024-07-22 18:10:16.224261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.224620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.224647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.188 qpair failed and we were unable to recover it. 00:33:12.188 [2024-07-22 18:10:16.225019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.225365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.225393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.188 qpair failed and we were unable to recover it. 00:33:12.188 [2024-07-22 18:10:16.225783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.226092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.226118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.188 qpair failed and we were unable to recover it. 00:33:12.188 [2024-07-22 18:10:16.226451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.226807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.226833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.188 qpair failed and we were unable to recover it. 
00:33:12.188 [2024-07-22 18:10:16.227079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.227431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.227461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.188 qpair failed and we were unable to recover it. 00:33:12.188 [2024-07-22 18:10:16.227805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.228124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.228150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.188 qpair failed and we were unable to recover it. 00:33:12.188 [2024-07-22 18:10:16.228535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.228864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.228890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.188 qpair failed and we were unable to recover it. 00:33:12.188 [2024-07-22 18:10:16.229284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.229515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.229546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.188 qpair failed and we were unable to recover it. 00:33:12.188 [2024-07-22 18:10:16.229919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.230280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.230307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.188 qpair failed and we were unable to recover it. 00:33:12.188 [2024-07-22 18:10:16.230704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.231044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.231070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.188 qpair failed and we were unable to recover it. 00:33:12.188 [2024-07-22 18:10:16.231422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.231773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.231800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.188 qpair failed and we were unable to recover it. 
00:33:12.188 [2024-07-22 18:10:16.232195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.232493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.232520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.188 qpair failed and we were unable to recover it. 00:33:12.188 [2024-07-22 18:10:16.232875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.233226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.233254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.188 qpair failed and we were unable to recover it. 00:33:12.188 [2024-07-22 18:10:16.233658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.233970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.233997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.188 qpair failed and we were unable to recover it. 00:33:12.188 [2024-07-22 18:10:16.234368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.234722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.188 [2024-07-22 18:10:16.234750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.189 qpair failed and we were unable to recover it. 00:33:12.189 [2024-07-22 18:10:16.235102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.235517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.235545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.189 qpair failed and we were unable to recover it. 00:33:12.189 [2024-07-22 18:10:16.235891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.236197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.236224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.189 qpair failed and we were unable to recover it. 00:33:12.189 [2024-07-22 18:10:16.236570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.236943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.236970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.189 qpair failed and we were unable to recover it. 
00:33:12.189 [2024-07-22 18:10:16.237366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.237690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.237716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.189 qpair failed and we were unable to recover it. 00:33:12.189 [2024-07-22 18:10:16.238001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.238367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.238395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.189 qpair failed and we were unable to recover it. 00:33:12.189 [2024-07-22 18:10:16.238740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.239107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.239134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.189 qpair failed and we were unable to recover it. 00:33:12.189 [2024-07-22 18:10:16.239524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.239896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.239922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.189 qpair failed and we were unable to recover it. 00:33:12.189 [2024-07-22 18:10:16.240261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.240597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.240625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.189 qpair failed and we were unable to recover it. 00:33:12.189 [2024-07-22 18:10:16.241018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.241380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.241408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.189 qpair failed and we were unable to recover it. 00:33:12.189 [2024-07-22 18:10:16.241802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.242016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.242044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.189 qpair failed and we were unable to recover it. 
00:33:12.189 [2024-07-22 18:10:16.242387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.242553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.242585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.189 qpair failed and we were unable to recover it. 00:33:12.189 [2024-07-22 18:10:16.242946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.243284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.243310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.189 qpair failed and we were unable to recover it. 00:33:12.189 [2024-07-22 18:10:16.243675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.244022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.244048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.189 qpair failed and we were unable to recover it. 00:33:12.189 [2024-07-22 18:10:16.244266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.244639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.244668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.189 qpair failed and we were unable to recover it. 00:33:12.189 [2024-07-22 18:10:16.245040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.245382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.245409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.189 qpair failed and we were unable to recover it. 00:33:12.189 [2024-07-22 18:10:16.245664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.245916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.245943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.189 qpair failed and we were unable to recover it. 00:33:12.189 [2024-07-22 18:10:16.246336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.246597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.246625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.189 qpair failed and we were unable to recover it. 
00:33:12.189 [2024-07-22 18:10:16.246947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.247312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.247339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.189 qpair failed and we were unable to recover it. 00:33:12.189 [2024-07-22 18:10:16.247704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.248046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.248073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.189 qpair failed and we were unable to recover it. 00:33:12.189 [2024-07-22 18:10:16.248433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.248768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.248795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.189 qpair failed and we were unable to recover it. 00:33:12.189 [2024-07-22 18:10:16.249187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.249419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.249453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.189 qpair failed and we were unable to recover it. 00:33:12.189 [2024-07-22 18:10:16.249788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.250136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.250163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.189 qpair failed and we were unable to recover it. 00:33:12.189 [2024-07-22 18:10:16.250604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.250851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.250880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.189 qpair failed and we were unable to recover it. 00:33:12.189 [2024-07-22 18:10:16.251246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.251654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.251682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.189 qpair failed and we were unable to recover it. 
00:33:12.189 [2024-07-22 18:10:16.252016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.252382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.252412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.189 qpair failed and we were unable to recover it. 00:33:12.189 [2024-07-22 18:10:16.252644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.252997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.253025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.189 qpair failed and we were unable to recover it. 00:33:12.189 [2024-07-22 18:10:16.253386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.253745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.253772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.189 qpair failed and we were unable to recover it. 00:33:12.189 [2024-07-22 18:10:16.254138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.254490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.189 [2024-07-22 18:10:16.254518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.190 qpair failed and we were unable to recover it. 00:33:12.190 [2024-07-22 18:10:16.254876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.190 [2024-07-22 18:10:16.255213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.190 [2024-07-22 18:10:16.255241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.190 qpair failed and we were unable to recover it. 00:33:12.190 [2024-07-22 18:10:16.255652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.190 [2024-07-22 18:10:16.256005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.190 [2024-07-22 18:10:16.256032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.190 qpair failed and we were unable to recover it. 00:33:12.190 [2024-07-22 18:10:16.256379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.190 [2024-07-22 18:10:16.256703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.190 [2024-07-22 18:10:16.256731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.190 qpair failed and we were unable to recover it. 
00:33:12.190 [2024-07-22 18:10:16.257021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.190 [2024-07-22 18:10:16.257378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.190 [2024-07-22 18:10:16.257406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420
00:33:12.190 qpair failed and we were unable to recover it.
[... the same group of messages repeats for every intervening connection attempt between 18:10:16.257650 and 18:10:16.362175: two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." ...]
00:33:12.195 [2024-07-22 18:10:16.362528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.195 [2024-07-22 18:10:16.362904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:12.195 [2024-07-22 18:10:16.362930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420
00:33:12.195 qpair failed and we were unable to recover it.
00:33:12.195 [2024-07-22 18:10:16.363331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.195 [2024-07-22 18:10:16.363571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.195 [2024-07-22 18:10:16.363598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.195 qpair failed and we were unable to recover it. 00:33:12.195 [2024-07-22 18:10:16.363983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.195 [2024-07-22 18:10:16.364336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.195 [2024-07-22 18:10:16.364372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.195 qpair failed and we were unable to recover it. 00:33:12.195 [2024-07-22 18:10:16.364717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.195 [2024-07-22 18:10:16.365062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.195 [2024-07-22 18:10:16.365088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.195 qpair failed and we were unable to recover it. 00:33:12.195 [2024-07-22 18:10:16.365415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.195 [2024-07-22 18:10:16.365764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.195 [2024-07-22 18:10:16.365790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.195 qpair failed and we were unable to recover it. 00:33:12.196 [2024-07-22 18:10:16.366140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.366369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.366396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.196 qpair failed and we were unable to recover it. 00:33:12.196 [2024-07-22 18:10:16.366789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.367133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.367159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.196 qpair failed and we were unable to recover it. 00:33:12.196 [2024-07-22 18:10:16.367517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.367729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.367755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.196 qpair failed and we were unable to recover it. 
00:33:12.196 [2024-07-22 18:10:16.368130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.368500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.368528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.196 qpair failed and we were unable to recover it. 00:33:12.196 [2024-07-22 18:10:16.368876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.369194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.369221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.196 qpair failed and we were unable to recover it. 00:33:12.196 [2024-07-22 18:10:16.369608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.369924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.369950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.196 qpair failed and we were unable to recover it. 00:33:12.196 [2024-07-22 18:10:16.370306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.370658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.370685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.196 qpair failed and we were unable to recover it. 00:33:12.196 [2024-07-22 18:10:16.371040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.371381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.371410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.196 qpair failed and we were unable to recover it. 00:33:12.196 [2024-07-22 18:10:16.371795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.372142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.372168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.196 qpair failed and we were unable to recover it. 00:33:12.196 [2024-07-22 18:10:16.372530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.372827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.372853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.196 qpair failed and we were unable to recover it. 
00:33:12.196 [2024-07-22 18:10:16.373211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.373556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.373584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.196 qpair failed and we were unable to recover it. 00:33:12.196 [2024-07-22 18:10:16.373906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.374086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.374112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.196 qpair failed and we were unable to recover it. 00:33:12.196 [2024-07-22 18:10:16.374533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.374858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.374884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.196 qpair failed and we were unable to recover it. 00:33:12.196 [2024-07-22 18:10:16.375231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.375468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.375499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.196 qpair failed and we were unable to recover it. 00:33:12.196 [2024-07-22 18:10:16.375851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.376201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.376227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.196 qpair failed and we were unable to recover it. 00:33:12.196 [2024-07-22 18:10:16.376568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.376790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.376817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.196 qpair failed and we were unable to recover it. 00:33:12.196 [2024-07-22 18:10:16.376928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.377228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.377255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.196 qpair failed and we were unable to recover it. 
00:33:12.196 [2024-07-22 18:10:16.377619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.377963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.377990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.196 qpair failed and we were unable to recover it. 00:33:12.196 [2024-07-22 18:10:16.378379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.378734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.378760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.196 qpair failed and we were unable to recover it. 00:33:12.196 [2024-07-22 18:10:16.379005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.379336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.379378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.196 qpair failed and we were unable to recover it. 00:33:12.196 [2024-07-22 18:10:16.379763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.380000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.380030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.196 qpair failed and we were unable to recover it. 00:33:12.196 [2024-07-22 18:10:16.380413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.380767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.380794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.196 qpair failed and we were unable to recover it. 00:33:12.196 [2024-07-22 18:10:16.381149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.381541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.381568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.196 qpair failed and we were unable to recover it. 00:33:12.196 [2024-07-22 18:10:16.381928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.382291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.382317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.196 qpair failed and we were unable to recover it. 
00:33:12.196 [2024-07-22 18:10:16.382659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.383004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.383031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.196 qpair failed and we were unable to recover it. 00:33:12.196 [2024-07-22 18:10:16.383448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.383778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.383805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.196 qpair failed and we were unable to recover it. 00:33:12.196 [2024-07-22 18:10:16.384171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.384523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.384551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.196 qpair failed and we were unable to recover it. 00:33:12.196 [2024-07-22 18:10:16.384949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.385174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.385204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.196 qpair failed and we were unable to recover it. 00:33:12.196 [2024-07-22 18:10:16.385475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.385690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.196 [2024-07-22 18:10:16.385717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.197 qpair failed and we were unable to recover it. 00:33:12.197 [2024-07-22 18:10:16.386157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.386485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.386512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.197 qpair failed and we were unable to recover it. 00:33:12.197 [2024-07-22 18:10:16.386901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.387224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.387250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.197 qpair failed and we were unable to recover it. 
00:33:12.197 [2024-07-22 18:10:16.387598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.387928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.387954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.197 qpair failed and we were unable to recover it. 00:33:12.197 [2024-07-22 18:10:16.388324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.388692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.388719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.197 qpair failed and we were unable to recover it. 00:33:12.197 [2024-07-22 18:10:16.388966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.389182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.389209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.197 qpair failed and we were unable to recover it. 00:33:12.197 [2024-07-22 18:10:16.389535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.389853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.389879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.197 qpair failed and we were unable to recover it. 00:33:12.197 [2024-07-22 18:10:16.390191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.390569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.390602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.197 qpair failed and we were unable to recover it. 00:33:12.197 [2024-07-22 18:10:16.390952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.391273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.391299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.197 qpair failed and we were unable to recover it. 00:33:12.197 [2024-07-22 18:10:16.391657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.392001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.392028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.197 qpair failed and we were unable to recover it. 
00:33:12.197 [2024-07-22 18:10:16.392422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.392774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.392800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.197 qpair failed and we were unable to recover it. 00:33:12.197 [2024-07-22 18:10:16.393196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.393557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.393585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.197 qpair failed and we were unable to recover it. 00:33:12.197 [2024-07-22 18:10:16.393916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.394265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.394291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.197 qpair failed and we were unable to recover it. 00:33:12.197 [2024-07-22 18:10:16.394632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.394846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.394872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.197 qpair failed and we were unable to recover it. 00:33:12.197 [2024-07-22 18:10:16.395097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.395457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.395483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.197 qpair failed and we were unable to recover it. 00:33:12.197 [2024-07-22 18:10:16.395840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.396215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.396242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.197 qpair failed and we were unable to recover it. 00:33:12.197 [2024-07-22 18:10:16.396640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.396949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.396975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.197 qpair failed and we were unable to recover it. 
00:33:12.197 [2024-07-22 18:10:16.397326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.397685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.397718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.197 qpair failed and we were unable to recover it. 00:33:12.197 [2024-07-22 18:10:16.398119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.398477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.398505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.197 qpair failed and we were unable to recover it. 00:33:12.197 [2024-07-22 18:10:16.398766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.399098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.399125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.197 qpair failed and we were unable to recover it. 00:33:12.197 [2024-07-22 18:10:16.399403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.399652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.399682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.197 qpair failed and we were unable to recover it. 00:33:12.197 [2024-07-22 18:10:16.400059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.400413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.400440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.197 qpair failed and we were unable to recover it. 00:33:12.197 [2024-07-22 18:10:16.400800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.401151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.401177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.197 qpair failed and we were unable to recover it. 00:33:12.197 [2024-07-22 18:10:16.401508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.401839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.401865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.197 qpair failed and we were unable to recover it. 
00:33:12.197 [2024-07-22 18:10:16.402237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.402573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.402601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.197 qpair failed and we were unable to recover it. 00:33:12.197 [2024-07-22 18:10:16.402833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.403142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.403168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.197 qpair failed and we were unable to recover it. 00:33:12.197 [2024-07-22 18:10:16.403416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.403625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.403651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.197 qpair failed and we were unable to recover it. 00:33:12.197 [2024-07-22 18:10:16.404039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.404392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.404420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.197 qpair failed and we were unable to recover it. 00:33:12.197 [2024-07-22 18:10:16.404800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.405009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.405035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.197 qpair failed and we were unable to recover it. 00:33:12.197 [2024-07-22 18:10:16.405379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.197 [2024-07-22 18:10:16.405699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.405726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.198 qpair failed and we were unable to recover it. 00:33:12.198 [2024-07-22 18:10:16.405952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.406308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.406335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.198 qpair failed and we were unable to recover it. 
00:33:12.198 [2024-07-22 18:10:16.406737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.407090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.407117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.198 qpair failed and we were unable to recover it. 00:33:12.198 [2024-07-22 18:10:16.407397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.407767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.407793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.198 qpair failed and we were unable to recover it. 00:33:12.198 [2024-07-22 18:10:16.408151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.408507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.408534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.198 qpair failed and we were unable to recover it. 00:33:12.198 [2024-07-22 18:10:16.408890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.409234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.409260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.198 qpair failed and we were unable to recover it. 00:33:12.198 [2024-07-22 18:10:16.409602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.409946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.409973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.198 qpair failed and we were unable to recover it. 00:33:12.198 [2024-07-22 18:10:16.410333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.410722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.410749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.198 qpair failed and we were unable to recover it. 00:33:12.198 [2024-07-22 18:10:16.411100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.411449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.411477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.198 qpair failed and we were unable to recover it. 
00:33:12.198 [2024-07-22 18:10:16.411881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.412106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.412132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.198 qpair failed and we were unable to recover it. 00:33:12.198 [2024-07-22 18:10:16.412509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.412928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.412954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.198 qpair failed and we were unable to recover it. 00:33:12.198 [2024-07-22 18:10:16.413187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.413544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.413571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.198 qpair failed and we were unable to recover it. 00:33:12.198 [2024-07-22 18:10:16.413927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.414277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.414303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.198 qpair failed and we were unable to recover it. 00:33:12.198 [2024-07-22 18:10:16.414543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.414886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.414913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.198 qpair failed and we were unable to recover it. 00:33:12.198 [2024-07-22 18:10:16.415251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.415602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.415630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.198 qpair failed and we were unable to recover it. 00:33:12.198 [2024-07-22 18:10:16.415962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.416272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.416299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.198 qpair failed and we were unable to recover it. 
00:33:12.198 [2024-07-22 18:10:16.416637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.416960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.416987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.198 qpair failed and we were unable to recover it. 00:33:12.198 [2024-07-22 18:10:16.417341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.417690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.417717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.198 qpair failed and we were unable to recover it. 00:33:12.198 [2024-07-22 18:10:16.418049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.418390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.418417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.198 qpair failed and we were unable to recover it. 00:33:12.198 [2024-07-22 18:10:16.418804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.419075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.419102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.198 qpair failed and we were unable to recover it. 00:33:12.198 [2024-07-22 18:10:16.419472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.419733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.419763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.198 qpair failed and we were unable to recover it. 00:33:12.198 [2024-07-22 18:10:16.420146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.420499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.420527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.198 qpair failed and we were unable to recover it. 00:33:12.198 [2024-07-22 18:10:16.420670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.420939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.420965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.198 qpair failed and we were unable to recover it. 
00:33:12.198 [2024-07-22 18:10:16.421316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.421676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.421704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.198 qpair failed and we were unable to recover it. 00:33:12.198 [2024-07-22 18:10:16.421948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.422288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.422315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.198 qpair failed and we were unable to recover it. 00:33:12.198 [2024-07-22 18:10:16.422675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.422997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.423023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.198 qpair failed and we were unable to recover it. 00:33:12.198 [2024-07-22 18:10:16.423392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.423756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.423782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.198 qpair failed and we were unable to recover it. 00:33:12.198 [2024-07-22 18:10:16.424108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.424330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.424376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.198 qpair failed and we were unable to recover it. 00:33:12.198 [2024-07-22 18:10:16.424760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.425110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.198 [2024-07-22 18:10:16.425136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.198 qpair failed and we were unable to recover it. 00:33:12.198 [2024-07-22 18:10:16.425502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.425886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.425913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.199 qpair failed and we were unable to recover it. 
00:33:12.199 [2024-07-22 18:10:16.426154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.426489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.426516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.199 qpair failed and we were unable to recover it. 00:33:12.199 [2024-07-22 18:10:16.426872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.427195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.427222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.199 qpair failed and we were unable to recover it. 00:33:12.199 [2024-07-22 18:10:16.427605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.427962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.427988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.199 qpair failed and we were unable to recover it. 00:33:12.199 [2024-07-22 18:10:16.428369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.428716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.428742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.199 qpair failed and we were unable to recover it. 00:33:12.199 [2024-07-22 18:10:16.429079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.429434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.429461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.199 qpair failed and we were unable to recover it. 00:33:12.199 [2024-07-22 18:10:16.429848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.430077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.430103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.199 qpair failed and we were unable to recover it. 00:33:12.199 [2024-07-22 18:10:16.430477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.430836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.430863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.199 qpair failed and we were unable to recover it. 
00:33:12.199 [2024-07-22 18:10:16.431092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.431397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.431425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.199 qpair failed and we were unable to recover it. 00:33:12.199 [2024-07-22 18:10:16.431795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.432130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.432156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.199 qpair failed and we were unable to recover it. 00:33:12.199 [2024-07-22 18:10:16.432512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.432884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.432917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.199 qpair failed and we were unable to recover it. 00:33:12.199 [2024-07-22 18:10:16.433266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.433541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.433568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.199 qpair failed and we were unable to recover it. 00:33:12.199 [2024-07-22 18:10:16.433891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.434262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.434289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.199 qpair failed and we were unable to recover it. 00:33:12.199 [2024-07-22 18:10:16.434544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.434930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.434956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.199 qpair failed and we were unable to recover it. 00:33:12.199 [2024-07-22 18:10:16.435318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.435676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.435704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.199 qpair failed and we were unable to recover it. 
00:33:12.199 [2024-07-22 18:10:16.436052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.436410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.436439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.199 qpair failed and we were unable to recover it. 00:33:12.199 [2024-07-22 18:10:16.436809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.437163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.437189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.199 qpair failed and we were unable to recover it. 00:33:12.199 [2024-07-22 18:10:16.437592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.437957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.437983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.199 qpair failed and we were unable to recover it. 00:33:12.199 [2024-07-22 18:10:16.438315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.438673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.438700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.199 qpair failed and we were unable to recover it. 00:33:12.199 [2024-07-22 18:10:16.439050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.439403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.439430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.199 qpair failed and we were unable to recover it. 00:33:12.199 [2024-07-22 18:10:16.439765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.440131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.440158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.199 qpair failed and we were unable to recover it. 00:33:12.199 [2024-07-22 18:10:16.440521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.440832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.440858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.199 qpair failed and we were unable to recover it. 
00:33:12.199 [2024-07-22 18:10:16.441093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.441418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.441446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.199 qpair failed and we were unable to recover it. 00:33:12.199 [2024-07-22 18:10:16.441844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.442175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.442201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.199 qpair failed and we were unable to recover it. 00:33:12.199 [2024-07-22 18:10:16.442633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.443004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.199 [2024-07-22 18:10:16.443031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.199 qpair failed and we were unable to recover it. 00:33:12.200 [2024-07-22 18:10:16.443275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.200 [2024-07-22 18:10:16.443616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.200 [2024-07-22 18:10:16.443643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.200 qpair failed and we were unable to recover it. 00:33:12.200 [2024-07-22 18:10:16.444069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.200 [2024-07-22 18:10:16.444407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.200 [2024-07-22 18:10:16.444435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.200 qpair failed and we were unable to recover it. 00:33:12.200 [2024-07-22 18:10:16.444776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.200 [2024-07-22 18:10:16.445103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.200 [2024-07-22 18:10:16.445129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.200 qpair failed and we were unable to recover it. 00:33:12.200 [2024-07-22 18:10:16.445496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.200 [2024-07-22 18:10:16.445872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.200 [2024-07-22 18:10:16.445898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.200 qpair failed and we were unable to recover it. 
00:33:12.200 [2024-07-22 18:10:16.446233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.200 [2024-07-22 18:10:16.446640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.200 [2024-07-22 18:10:16.446668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.200 qpair failed and we were unable to recover it. 00:33:12.200 [2024-07-22 18:10:16.447049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.200 [2024-07-22 18:10:16.447414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.200 [2024-07-22 18:10:16.447442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.200 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.447781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.448016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.448042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.448382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.448813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.448840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.449198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.449551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.449579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.449809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.450032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.450058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.450334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.450697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.450724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 
00:33:12.554 [2024-07-22 18:10:16.451115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.451459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.451486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.451852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.452214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.452240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.452619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.452999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.453026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.453286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.453649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.453675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.454039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.454405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.454432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.454793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.455087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.455113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.455477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.455818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.455844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 
00:33:12.554 [2024-07-22 18:10:16.456213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.456551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.456579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.456934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.457288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.457314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.457658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.457972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.457999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.458386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.458755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.458782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.459147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.459459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.459486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.459871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.460169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.460195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.460481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.460792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.460819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 
00:33:12.554 [2024-07-22 18:10:16.461156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.461496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.461523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.461909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.462317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.462344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.462600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.462836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.462865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.463230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.463582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.463609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.463950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.464305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.464331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.464741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.465070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.465096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.465411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.465765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.465791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 
00:33:12.554 [2024-07-22 18:10:16.466083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.466435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.466462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.466828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.467198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.467224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.467598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.467892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.467919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.468291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.468661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.468688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.469026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.469257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.469289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.469688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.470005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.470031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.470387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.470806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.470832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 
00:33:12.554 [2024-07-22 18:10:16.471061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.471414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.471442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.471775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.472158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.472184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.472532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.472768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.472798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.473240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.473502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.473530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.473961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.474317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.474343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.474747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.474979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.475005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.475374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.475731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.475757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 
00:33:12.554 [2024-07-22 18:10:16.476112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.476452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.476480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.476839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.477156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.477183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.477545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.477902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.477929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.478269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.478646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.478674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.479026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.479345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.479380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.479702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.479922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.479949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.480184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.480526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.480555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 
00:33:12.554 [2024-07-22 18:10:16.480875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.481232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.481258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.481610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.481889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.481916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.554 qpair failed and we were unable to recover it. 00:33:12.554 [2024-07-22 18:10:16.482280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.482657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.554 [2024-07-22 18:10:16.482685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.482943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.483362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.483389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.483768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.484112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.484139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.484386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.484800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.484827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.485075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.485422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.485451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 
00:33:12.555 [2024-07-22 18:10:16.485787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.486148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.486174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.486514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.486884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.486910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.487286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.487523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.487550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.487969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.488321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.488347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.488707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.489089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.489115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.489455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.489837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.489863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.490218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.490575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.490602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 
00:33:12.555 [2024-07-22 18:10:16.490745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.491089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.491118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.491376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.491661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.491687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.491935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.492187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.492212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.492665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.492887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.492914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.493317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.493699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.493727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.494099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.494419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.494448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.494843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.495235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.495262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 
00:33:12.555 [2024-07-22 18:10:16.495601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.495958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.495986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.496336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.496727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.496754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.497140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.497487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.497515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.497896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.498134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.498161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.498528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.498890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.498916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.499289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.499559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.499587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.499930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.500178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.500205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 
00:33:12.555 [2024-07-22 18:10:16.500533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.500877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.500904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.501253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.501462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.501490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.501813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.502131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.502158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.502500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.502851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.502878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.503242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.503515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.503542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.503789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.504101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.504128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.504362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.504539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.504572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 
00:33:12.555 [2024-07-22 18:10:16.504907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.505126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.505153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.505404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.505674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.505701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.506059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.506276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.506302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.506555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.506914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.506940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.507293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.507645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.507674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.508047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.508275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.508301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.508652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.509013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.509040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 
00:33:12.555 [2024-07-22 18:10:16.509399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.509783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.509810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.510177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.510542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.510569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.510921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.511183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.511216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.511505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.511874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.511900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.512293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.512636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.512664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.512920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.513337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.513372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.513800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.514110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.514137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 
00:33:12.555 [2024-07-22 18:10:16.514385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.514604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.514630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.514891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.515202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.515229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.515495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.515885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.515911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.516142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.516392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.516420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.516800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.517163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.517190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.517577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.517974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.518000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.555 qpair failed and we were unable to recover it. 00:33:12.555 [2024-07-22 18:10:16.518253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.518676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.555 [2024-07-22 18:10:16.518703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 
00:33:12.556 [2024-07-22 18:10:16.519072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.519426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.519454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.519707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.520035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.520061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.520456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.520833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.520859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.521224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.521589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.521616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.522007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.522325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.522370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.522779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.523084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.523110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.523444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.523840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.523867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 
00:33:12.556 [2024-07-22 18:10:16.524235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.524580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.524608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.524972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.525330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.525365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.525705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.525866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.525894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.526219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.526561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.526588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.526952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.527300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.527326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.527699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.527937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.527966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.528319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.528706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.528733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 
00:33:12.556 [2024-07-22 18:10:16.528996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.529365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.529392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.529767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.530107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.530133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.530548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.530912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.530938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.531300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.531611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.531638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.532027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.532268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.532294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.532644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.532980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.533007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.533343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.533703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.533730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 
00:33:12.556 [2024-07-22 18:10:16.534083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.534479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.534507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.534896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.535243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.535270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.535618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.535979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.536005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.536339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.536708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.536735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.537100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.537461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.537488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.537841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.538197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.538223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.538600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.538829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.538855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 
00:33:12.556 [2024-07-22 18:10:16.539219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.539570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.539597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.539953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.540273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.540300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.540714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.541031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.541057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.541422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.541649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.541676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.542093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.542449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.542478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.542861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.543224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.543250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.543623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.543982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.544008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 
00:33:12.556 [2024-07-22 18:10:16.544362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.544703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.544729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.545067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.545430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.545457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.545780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.546162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.546189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.546538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.546862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.546888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.547225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.547579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.547612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.547961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.548079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.548109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.548477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.548806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.548832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 
00:33:12.556 [2024-07-22 18:10:16.549227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.549542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.549569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.549804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.550049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.556 [2024-07-22 18:10:16.550080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.556 qpair failed and we were unable to recover it. 00:33:12.556 [2024-07-22 18:10:16.550429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.550786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.550813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.551231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.551594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.551622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.551841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.552183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.552209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.552546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.552963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.552990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.553237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.553569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.553597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 
00:33:12.557 [2024-07-22 18:10:16.553909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.554143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.554174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.554583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.554934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.554960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.555331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.555701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.555728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.556070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.556388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.556415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.556787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.557153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.557179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.557510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.557885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.557912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.558135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.558398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.558427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 
00:33:12.557 [2024-07-22 18:10:16.558791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.559009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.559035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.559265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.559654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.559681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.560006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.560339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.560373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.560737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.561090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.561117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.561451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.561812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.561839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.562060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.562442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.562470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.562829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.563180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.563207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 
00:33:12.557 [2024-07-22 18:10:16.563570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.563926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.563952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.564293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.564653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.564680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.565047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.565395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.565422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.565681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.566073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.566099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.566387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.566779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.566807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.567144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.567340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.567378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.567741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.568093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.568120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 
00:33:12.557 [2024-07-22 18:10:16.568477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.568861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.568888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.569258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.569516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.569543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.569915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.570322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.570357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.570600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.570915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.570941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.571301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.571670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.571698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.571910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.572234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.572261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.572606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.572958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.572984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 
00:33:12.557 [2024-07-22 18:10:16.573395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.573738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.573764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.574124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.574344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.574392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.574757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.575092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.575119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.575457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.575793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.575821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.576187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.576539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.576567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.576953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.577309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.577335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.577675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.578019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.578046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 
00:33:12.557 [2024-07-22 18:10:16.578410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.578770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.578797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.579156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.579412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.579439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.579847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.580176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.580203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.580458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.580849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.580876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.581112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.581447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.581474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.581829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.582184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.582211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.582614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.582970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.583007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 
00:33:12.557 [2024-07-22 18:10:16.583260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.583605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.583633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.583965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.584320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.584346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.584680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.585052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.585079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.585310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.585642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.585670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.586002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.586215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.586242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.586588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.586943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.586970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 00:33:12.557 [2024-07-22 18:10:16.587302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.587705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.587732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.557 qpair failed and we were unable to recover it. 
00:33:12.557 [2024-07-22 18:10:16.588093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.557 [2024-07-22 18:10:16.588447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.588475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.588841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.589200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.589227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.589596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.589949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.589975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.590346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.590735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.590761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.591129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.591471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.591499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.591860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.592217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.592243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.592580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.592925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.592951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 
00:33:12.558 [2024-07-22 18:10:16.593312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.593668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.593695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.594041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.594290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.594317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.594683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.595003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.595029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.595396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.595774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.595800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.596163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.596517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.596545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.596758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.597113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.597141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.597502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.597750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.597780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 
00:33:12.558 [2024-07-22 18:10:16.598023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.598398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.598425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.598836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.599144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.599171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.599502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.599893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.599919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.600248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.600572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.600599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.600946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.601295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.601322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.601703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.602046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.602073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.602420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.602759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.602785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 
00:33:12.558 [2024-07-22 18:10:16.603114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.603471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.603499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.603858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.604202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.604228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.604592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.604907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.604934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.605292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.605617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.605645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.605995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.606369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.606397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.606765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.607092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.607118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.607450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.607804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.607830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 
00:33:12.558 [2024-07-22 18:10:16.608070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.608306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.608333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.608724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.609100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.609127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.609337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.609688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.609715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.610142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.610511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.610539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.610901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.611288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.611315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.611662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.612031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.612058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.612428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.612770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.612796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 
00:33:12.558 [2024-07-22 18:10:16.613169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.613528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.613556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.613910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.614266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.614293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.614513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.614729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.614755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.615165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.615520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.615547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.615921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.616269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.616296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.616662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.617003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.617030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.617398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.617786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.617812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 
00:33:12.558 [2024-07-22 18:10:16.618180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.618525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.618552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.618795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.619148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.619180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.619546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.619894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.619920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.620286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.620649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.620676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.621012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.621237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.621263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.621645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.621971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.621998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.622362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.622708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.622734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 
00:33:12.558 [2024-07-22 18:10:16.623099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.623471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.623498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.623860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.624215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.624242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.624606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.624927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.624953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.625264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.625627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.625655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.625988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.626341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.626380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.626829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.627143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.558 [2024-07-22 18:10:16.627169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.558 qpair failed and we were unable to recover it. 00:33:12.558 [2024-07-22 18:10:16.627508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.627897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.627924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 
00:33:12.559 [2024-07-22 18:10:16.628276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.628502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.628533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.628892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.629229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.629255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.629626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.630009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.630035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.630324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.630685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.630713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.631104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.631332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.631370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.631761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.632118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.632145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.632437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.632772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.632798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 
00:33:12.559 [2024-07-22 18:10:16.633132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.633469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.633497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.633853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.634206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.634233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.634569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.634921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.634948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.635305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.635699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.635728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.635919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.636284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.636311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.636665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.636978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.637005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.637292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.637659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.637688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 
00:33:12.559 [2024-07-22 18:10:16.638048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.638355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.638383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.638626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.638987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.639014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.639378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.639590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.639617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.639996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.640300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.640327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.640737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.641085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.641111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.641474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.641830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.641857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.642221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.642607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.642634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 
00:33:12.559 [2024-07-22 18:10:16.643020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.643368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.643395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.643763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.644112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.644138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.644494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.644853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.644881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.645224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.645585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.645613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.646011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.646241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.646267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.646615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.646971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.646998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.647235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.647628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.647655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 
00:33:12.559 [2024-07-22 18:10:16.648014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.648340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.648374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.648700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.649051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.649077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.649426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.649792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.649819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.650174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.650483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.650511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.650867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.651223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.651250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.651628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.651927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.651954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.652300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.652612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.652641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 
00:33:12.559 [2024-07-22 18:10:16.652993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.653245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.653273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.653675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.654002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.654029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.654283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.654611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.654638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.654926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.655279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.655311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.655714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.656063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.656090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.656455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.656813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.656839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.657200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.657532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.657561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 
00:33:12.559 [2024-07-22 18:10:16.657894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.658261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.658287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.658645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.658957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.658983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.659338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.659720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.659746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.660074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.660423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.660451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.660813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.661171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.661198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.661497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.661885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.661911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 00:33:12.559 [2024-07-22 18:10:16.662273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.662582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.662615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.559 qpair failed and we were unable to recover it. 
00:33:12.559 [2024-07-22 18:10:16.662851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.559 [2024-07-22 18:10:16.663113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.663143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.663289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.663679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.663707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.663928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.664279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.664306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.664676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.665029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.665055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.665389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.665747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.665774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.666157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.666493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.666520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.666768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.667076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.667102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 
00:33:12.560 [2024-07-22 18:10:16.667458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.667794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.667822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.668164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.668489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.668517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.668860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.669080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.669107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.669472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.669837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.669864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.670224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.670537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.670565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.670920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.671269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.671295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.671540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.671884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.671911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 
00:33:12.560 [2024-07-22 18:10:16.672273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.672614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.672641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.673004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.673369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.673397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.673759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.674096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.674122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.674485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.674817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.674843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.675228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.675578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.675604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.675965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.676278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.676305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.676705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.677083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.677110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 
00:33:12.560 [2024-07-22 18:10:16.677446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.677801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.677828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.678205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.678559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.678587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.678923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.679236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.679262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.679598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.679916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.679942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.680298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.680629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.680658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.681016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.681373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.681401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.681799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.682030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.682057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 
00:33:12.560 [2024-07-22 18:10:16.682414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.682766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.682793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.683023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.683417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.683444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.683803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.684197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.684224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.684532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.684838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.684864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.685206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.685559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.685586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.685919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.686266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.686293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.686627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.686983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.687010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 
00:33:12.560 [2024-07-22 18:10:16.687407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.687769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.687795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.688127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.688480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.688507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.688860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.689215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.689242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.689602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.689916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.689942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.690287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.690515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.690542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.690975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.691330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.691366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.691632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.691977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.692004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 
00:33:12.560 [2024-07-22 18:10:16.692384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.692788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.692817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.693200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.693553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.693581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.693787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.694127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.694154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.694488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.694779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.694805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.695173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.695526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.695553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.695906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.696273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.696299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.696661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.696997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.697024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 
00:33:12.560 [2024-07-22 18:10:16.697403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.697798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.697824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.698190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.698541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.698574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.698925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.699302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.699329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.699701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.700058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.700084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.700318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.700713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.560 [2024-07-22 18:10:16.700740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.560 qpair failed and we were unable to recover it. 00:33:12.560 [2024-07-22 18:10:16.701092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.701451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.701479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.701839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.702157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.702183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 
00:33:12.561 [2024-07-22 18:10:16.702543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.702897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.702923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.703153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.703387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.703415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.703789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.704022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.704048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.704429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.704797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.704824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.705160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.705539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.705568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.705926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.706263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.706289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.706639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.706993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.707019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 
00:33:12.561 [2024-07-22 18:10:16.707242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.707576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.707603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.707829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.708195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.708221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.708577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.708933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.708959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.709316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.709708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.709736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.710096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.710477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.710505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.710881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.711263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.711290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.711659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.711983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.712009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 
00:33:12.561 [2024-07-22 18:10:16.712401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.712694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.712721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.713122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.713469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.713496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.713831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.714126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.714153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.714519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.714876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.714904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.715258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.715636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.715663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.715947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.716310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.716336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.716654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.717024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.717051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 
00:33:12.561 [2024-07-22 18:10:16.717411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.717737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.717764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.718033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.718396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.718423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.718802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.719044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.719070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.719435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.719778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.719805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.720053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.720284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.720314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.720705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.721089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.721115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.721516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.721877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.721904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 
00:33:12.561 [2024-07-22 18:10:16.722241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.722578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.722606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.722962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.723321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.723347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.723732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.723960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.723987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.724390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.724752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.724779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.725111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.725474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.725501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.725841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.726195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.726222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.726461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.726812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.726839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 
00:33:12.561 [2024-07-22 18:10:16.727244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.727642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.727669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.728004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.728383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.728411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.728806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.729166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.729192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.729565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.729929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.729955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.730347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.730709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.730735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.731106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.731475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.731502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.731876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.732233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.732259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 
00:33:12.561 [2024-07-22 18:10:16.732631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.732990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.733017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.733406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.733773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.733800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.561 [2024-07-22 18:10:16.734159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.734517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.561 [2024-07-22 18:10:16.734544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.561 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.734920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.735293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.735326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.735685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.736044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.736070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.736475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.736727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.736757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.736978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.737372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.737399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 
00:33:12.562 [2024-07-22 18:10:16.737780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.738137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.738163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.738416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.738768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.738793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.739042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.739395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.739423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.739813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.740174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.740200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.740575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.740933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.740960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.741337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.741698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.741725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.741953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.742188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.742214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 
00:33:12.562 [2024-07-22 18:10:16.742557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.742885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.742912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.743254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.743624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.743652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.744012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.744383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.744411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.744827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.745176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.745203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.745590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.745927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.745954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.746202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.746430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.746458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.746703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.747061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.747088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 
00:33:12.562 [2024-07-22 18:10:16.747443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.747812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.747839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.748099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.748296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.748322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.748576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.748920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.748947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.749346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.749619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.749646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.750038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.750276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.750302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.750657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.750896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.750923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.751180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.751470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.751498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 
00:33:12.562 [2024-07-22 18:10:16.751924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.752288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.752315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.752729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.753083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.753110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.753400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.753787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.753813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.754216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.754429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.754457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.754860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.755193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.755220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.755523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.755899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.755926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.756320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.756576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.756604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 
00:33:12.562 [2024-07-22 18:10:16.756903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.757294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.757321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.757635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.757981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.758008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.758271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.758624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.758652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.759086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.759346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.759384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.759770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.760131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.760158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.760414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.760696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.760723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.761087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.761440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.761469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 
00:33:12.562 [2024-07-22 18:10:16.761873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.762113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.762140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.762396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.762780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.762807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.763078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.763309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.763340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.763622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.764050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.764078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.764434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.764795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.764822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.765078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.765448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.765477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.765855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.766226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.766253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 
00:33:12.562 [2024-07-22 18:10:16.766508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.766752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.766779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.767191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.767514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.767542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.767818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.768055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.768081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.768469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.768836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.768863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.769218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.769561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.769589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.769932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.770304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.770338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 00:33:12.562 [2024-07-22 18:10:16.770743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.771113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.771141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.562 qpair failed and we were unable to recover it. 
00:33:12.562 [2024-07-22 18:10:16.771493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.771876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.562 [2024-07-22 18:10:16.771905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.772263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.772606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.772636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.773010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.773380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.773409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.773824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.774187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.774217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.774516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.774734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.774761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.775123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.775397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.775424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.775803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.776118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.776145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 
00:33:12.563 [2024-07-22 18:10:16.776393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.776656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.776682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.777056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.777292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.777327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.777610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.777978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.778004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.778342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.778657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.778684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.779053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.779221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.779247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.779683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.779915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.779940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.780286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.780650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.780679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 
00:33:12.563 [2024-07-22 18:10:16.780914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.781244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.781270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.781634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.781855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.781883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.782273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.782632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.782661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.783060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.783437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.783466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.783864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.784196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.784225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.784572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.784929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.784956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.785322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.785741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.785770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 
00:33:12.563 [2024-07-22 18:10:16.786005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.786313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.786341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.786636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.786955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.786984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.787230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.787564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.787592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.787966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.788286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.788313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.788665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.789051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.789078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.789425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.789764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.789792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.790131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.790487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.790515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 
00:33:12.563 [2024-07-22 18:10:16.790918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.791250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.791277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.791617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.791949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.791976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.792336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.792736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.792763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.793136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.793380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.793409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.793797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.794159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.794185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.794428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.794779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.794806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.795155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.795513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.795540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 
00:33:12.563 [2024-07-22 18:10:16.795905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.796259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.796286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.796619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.796982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.797008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.797244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.797572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.797599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.797960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.798317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.798344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.798742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.799133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.799160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.799516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.799885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.799911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.800253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.800493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.800520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 
00:33:12.563 [2024-07-22 18:10:16.800904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.801255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.801281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.801660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.801994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.802021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.802387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.802748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.802775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.803136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.803369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.803397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.803769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.804122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.804149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.804500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.804847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.804874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.805232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.805424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.805452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 
00:33:12.563 [2024-07-22 18:10:16.805818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.806205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.806231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.806572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.806927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.806954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.807320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.807684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.807711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.563 [2024-07-22 18:10:16.808071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.808427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.563 [2024-07-22 18:10:16.808455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.563 qpair failed and we were unable to recover it. 00:33:12.564 [2024-07-22 18:10:16.808828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.809163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.809190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.564 qpair failed and we were unable to recover it. 00:33:12.564 [2024-07-22 18:10:16.809559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.809843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.809870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.564 qpair failed and we were unable to recover it. 00:33:12.564 [2024-07-22 18:10:16.810240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.810588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.810616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.564 qpair failed and we were unable to recover it. 
00:33:12.564 [2024-07-22 18:10:16.810973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.811341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.811376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.564 qpair failed and we were unable to recover it. 00:33:12.564 [2024-07-22 18:10:16.811737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.812086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.812112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.564 qpair failed and we were unable to recover it. 00:33:12.564 [2024-07-22 18:10:16.812453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.812842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.812869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.564 qpair failed and we were unable to recover it. 00:33:12.564 [2024-07-22 18:10:16.813230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.813573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.813606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.564 qpair failed and we were unable to recover it. 00:33:12.564 [2024-07-22 18:10:16.813964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.814366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.814394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.564 qpair failed and we were unable to recover it. 00:33:12.564 [2024-07-22 18:10:16.814734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.815127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.815154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.564 qpair failed and we were unable to recover it. 00:33:12.564 [2024-07-22 18:10:16.815522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.815908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.815935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.564 qpair failed and we were unable to recover it. 
00:33:12.564 [2024-07-22 18:10:16.816269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.816684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.816712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.564 qpair failed and we were unable to recover it. 00:33:12.564 [2024-07-22 18:10:16.817066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.817383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.817410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.564 qpair failed and we were unable to recover it. 00:33:12.564 [2024-07-22 18:10:16.817655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.817903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.817934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.564 qpair failed and we were unable to recover it. 00:33:12.564 [2024-07-22 18:10:16.818292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.818627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.818655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.564 qpair failed and we were unable to recover it. 00:33:12.564 [2024-07-22 18:10:16.819009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.819373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.819400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.564 qpair failed and we were unable to recover it. 00:33:12.564 [2024-07-22 18:10:16.819727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.820087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.820113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.564 qpair failed and we were unable to recover it. 00:33:12.564 [2024-07-22 18:10:16.820447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.820789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.820816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.564 qpair failed and we were unable to recover it. 
00:33:12.564 [2024-07-22 18:10:16.821176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.821532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.821560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.564 qpair failed and we were unable to recover it. 00:33:12.564 [2024-07-22 18:10:16.821929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.822274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.822301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.564 qpair failed and we were unable to recover it. 00:33:12.564 [2024-07-22 18:10:16.822703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.823090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.823116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.564 qpair failed and we were unable to recover it. 00:33:12.564 [2024-07-22 18:10:16.823504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.823900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.564 [2024-07-22 18:10:16.823929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.564 qpair failed and we were unable to recover it. 00:33:12.831 [2024-07-22 18:10:16.824301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.831 [2024-07-22 18:10:16.824567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.831 [2024-07-22 18:10:16.824596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.831 qpair failed and we were unable to recover it. 00:33:12.831 [2024-07-22 18:10:16.824899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.825258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.825285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.832 qpair failed and we were unable to recover it. 00:33:12.832 [2024-07-22 18:10:16.825619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.825881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.825911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.832 qpair failed and we were unable to recover it. 
00:33:12.832 [2024-07-22 18:10:16.826274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.826678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.826706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.832 qpair failed and we were unable to recover it. 00:33:12.832 [2024-07-22 18:10:16.827105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.827466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.827493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.832 qpair failed and we were unable to recover it. 00:33:12.832 [2024-07-22 18:10:16.827854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.828209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.828236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.832 qpair failed and we were unable to recover it. 00:33:12.832 [2024-07-22 18:10:16.828583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.828968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.828994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.832 qpair failed and we were unable to recover it. 00:33:12.832 [2024-07-22 18:10:16.829257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.829632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.829660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.832 qpair failed and we were unable to recover it. 00:33:12.832 [2024-07-22 18:10:16.830045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.830405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.830432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.832 qpair failed and we were unable to recover it. 00:33:12.832 [2024-07-22 18:10:16.830813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.831143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.831170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.832 qpair failed and we were unable to recover it. 
00:33:12.832 [2024-07-22 18:10:16.831516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.831865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.831892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.832 qpair failed and we were unable to recover it. 00:33:12.832 [2024-07-22 18:10:16.832286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.832654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.832682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.832 qpair failed and we were unable to recover it. 00:33:12.832 [2024-07-22 18:10:16.833030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.833267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.833296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.832 qpair failed and we were unable to recover it. 00:33:12.832 [2024-07-22 18:10:16.833721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.833936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.833964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.832 qpair failed and we were unable to recover it. 00:33:12.832 [2024-07-22 18:10:16.834305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.834644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.834671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.832 qpair failed and we were unable to recover it. 00:33:12.832 [2024-07-22 18:10:16.834947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.835292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.835318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.832 qpair failed and we were unable to recover it. 00:33:12.832 [2024-07-22 18:10:16.835727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.836122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.836148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.832 qpair failed and we were unable to recover it. 
00:33:12.832 [2024-07-22 18:10:16.836488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.836855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.836883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.832 qpair failed and we were unable to recover it. 00:33:12.832 [2024-07-22 18:10:16.837242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.837474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.837502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.832 qpair failed and we were unable to recover it. 00:33:12.832 [2024-07-22 18:10:16.837884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.838242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.838268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.832 qpair failed and we were unable to recover it. 00:33:12.832 [2024-07-22 18:10:16.838665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.839017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.839044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.832 qpair failed and we were unable to recover it. 00:33:12.832 [2024-07-22 18:10:16.839379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.839707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.839733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.832 qpair failed and we were unable to recover it. 00:33:12.832 [2024-07-22 18:10:16.840098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.840312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.840338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.832 qpair failed and we were unable to recover it. 00:33:12.832 [2024-07-22 18:10:16.840748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.841091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.841117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.832 qpair failed and we were unable to recover it. 
00:33:12.832 [2024-07-22 18:10:16.841460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.841855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.841882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.832 qpair failed and we were unable to recover it. 00:33:12.832 [2024-07-22 18:10:16.842249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.842643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.842670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.832 qpair failed and we were unable to recover it. 00:33:12.832 [2024-07-22 18:10:16.843077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.843436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.843464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.832 qpair failed and we were unable to recover it. 00:33:12.832 [2024-07-22 18:10:16.843820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.844186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.844213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.832 qpair failed and we were unable to recover it. 00:33:12.832 [2024-07-22 18:10:16.844584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.844812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.844842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.832 qpair failed and we were unable to recover it. 00:33:12.832 [2024-07-22 18:10:16.845207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.832 [2024-07-22 18:10:16.845371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.845399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.833 qpair failed and we were unable to recover it. 00:33:12.833 [2024-07-22 18:10:16.845784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.846141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.846167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.833 qpair failed and we were unable to recover it. 
00:33:12.833 [2024-07-22 18:10:16.846531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.846876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.846903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.833 qpair failed and we were unable to recover it. 00:33:12.833 [2024-07-22 18:10:16.847269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.847661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.847688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.833 qpair failed and we were unable to recover it. 00:33:12.833 [2024-07-22 18:10:16.848055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.848418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.848445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.833 qpair failed and we were unable to recover it. 00:33:12.833 [2024-07-22 18:10:16.848702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.849044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.849071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.833 qpair failed and we were unable to recover it. 00:33:12.833 [2024-07-22 18:10:16.849329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.849716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.849743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.833 qpair failed and we were unable to recover it. 00:33:12.833 [2024-07-22 18:10:16.850080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.850439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.850474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.833 qpair failed and we were unable to recover it. 00:33:12.833 [2024-07-22 18:10:16.850852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.851165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.851192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.833 qpair failed and we were unable to recover it. 
00:33:12.833 [2024-07-22 18:10:16.851533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.851859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.851886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.833 qpair failed and we were unable to recover it. 00:33:12.833 [2024-07-22 18:10:16.852294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.852653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.852680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.833 qpair failed and we were unable to recover it. 00:33:12.833 [2024-07-22 18:10:16.852934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.853282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.853308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.833 qpair failed and we were unable to recover it. 00:33:12.833 [2024-07-22 18:10:16.853688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.854024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.854051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.833 qpair failed and we were unable to recover it. 00:33:12.833 [2024-07-22 18:10:16.854415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.854769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.854795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.833 qpair failed and we were unable to recover it. 00:33:12.833 [2024-07-22 18:10:16.855156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.855516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.855543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.833 qpair failed and we were unable to recover it. 00:33:12.833 [2024-07-22 18:10:16.855834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.856100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.856127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.833 qpair failed and we were unable to recover it. 
00:33:12.833 [2024-07-22 18:10:16.856467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.856804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.856830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.833 qpair failed and we were unable to recover it. 00:33:12.833 [2024-07-22 18:10:16.857192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.857495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.857522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.833 qpair failed and we were unable to recover it. 00:33:12.833 [2024-07-22 18:10:16.857845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.858204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.858230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.833 qpair failed and we were unable to recover it. 00:33:12.833 [2024-07-22 18:10:16.858569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.858896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.858922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.833 qpair failed and we were unable to recover it. 00:33:12.833 [2024-07-22 18:10:16.859157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.859521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.859548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.833 qpair failed and we were unable to recover it. 00:33:12.833 [2024-07-22 18:10:16.859913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.860216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.860243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.833 qpair failed and we were unable to recover it. 00:33:12.833 [2024-07-22 18:10:16.860576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.860909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.860935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.833 qpair failed and we were unable to recover it. 
00:33:12.833 [2024-07-22 18:10:16.861289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.861653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.861682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.833 qpair failed and we were unable to recover it. 00:33:12.833 [2024-07-22 18:10:16.862076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.862438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.862466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.833 qpair failed and we were unable to recover it. 00:33:12.833 [2024-07-22 18:10:16.862846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.863073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.863102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.833 qpair failed and we were unable to recover it. 00:33:12.833 [2024-07-22 18:10:16.863302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.863658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.863686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.833 qpair failed and we were unable to recover it. 00:33:12.833 [2024-07-22 18:10:16.864051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.864418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.864445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.833 qpair failed and we were unable to recover it. 00:33:12.833 [2024-07-22 18:10:16.864836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.865133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.833 [2024-07-22 18:10:16.865159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.833 qpair failed and we were unable to recover it. 00:33:12.834 [2024-07-22 18:10:16.865500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.865842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.865868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.834 qpair failed and we were unable to recover it. 
00:33:12.834 [2024-07-22 18:10:16.866199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.866550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.866577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.834 qpair failed and we were unable to recover it. 00:33:12.834 [2024-07-22 18:10:16.866824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.867169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.867196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.834 qpair failed and we were unable to recover it. 00:33:12.834 [2024-07-22 18:10:16.867536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.867917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.867944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.834 qpair failed and we were unable to recover it. 00:33:12.834 [2024-07-22 18:10:16.868176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.868505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.868532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.834 qpair failed and we were unable to recover it. 00:33:12.834 [2024-07-22 18:10:16.868892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.869252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.869279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.834 qpair failed and we were unable to recover it. 00:33:12.834 [2024-07-22 18:10:16.869618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.869948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.869975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.834 qpair failed and we were unable to recover it. 00:33:12.834 [2024-07-22 18:10:16.870306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.870680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.870708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.834 qpair failed and we were unable to recover it. 
00:33:12.834 [2024-07-22 18:10:16.871068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.871423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.871451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.834 qpair failed and we were unable to recover it. 00:33:12.834 [2024-07-22 18:10:16.871815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.872171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.872197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.834 qpair failed and we were unable to recover it. 00:33:12.834 [2024-07-22 18:10:16.872579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.872932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.872958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.834 qpair failed and we were unable to recover it. 00:33:12.834 [2024-07-22 18:10:16.873295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.873657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.873684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.834 qpair failed and we were unable to recover it. 00:33:12.834 [2024-07-22 18:10:16.874047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.874389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.874416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.834 qpair failed and we were unable to recover it. 00:33:12.834 [2024-07-22 18:10:16.874787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.875142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.875169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.834 qpair failed and we were unable to recover it. 00:33:12.834 [2024-07-22 18:10:16.875540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.875893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.875919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.834 qpair failed and we were unable to recover it. 
00:33:12.834 [2024-07-22 18:10:16.876258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.876593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.876620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.834 qpair failed and we were unable to recover it. 00:33:12.834 [2024-07-22 18:10:16.876999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.877379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.877407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.834 qpair failed and we were unable to recover it. 00:33:12.834 [2024-07-22 18:10:16.877796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.878079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.878106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.834 qpair failed and we were unable to recover it. 00:33:12.834 [2024-07-22 18:10:16.878482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.878730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.878756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.834 qpair failed and we were unable to recover it. 00:33:12.834 [2024-07-22 18:10:16.879016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.879401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.879432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.834 qpair failed and we were unable to recover it. 00:33:12.834 [2024-07-22 18:10:16.879802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.880121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.880147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.834 qpair failed and we were unable to recover it. 00:33:12.834 [2024-07-22 18:10:16.880409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.880820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.880847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.834 qpair failed and we were unable to recover it. 
00:33:12.834 [2024-07-22 18:10:16.881185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.881446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.881474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.834 qpair failed and we were unable to recover it. 00:33:12.834 [2024-07-22 18:10:16.881875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.882209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.882235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.834 qpair failed and we were unable to recover it. 00:33:12.834 [2024-07-22 18:10:16.882590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.882953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.882979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.834 qpair failed and we were unable to recover it. 00:33:12.834 [2024-07-22 18:10:16.883235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.883597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.883626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.834 qpair failed and we were unable to recover it. 00:33:12.834 [2024-07-22 18:10:16.883990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.884369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.884397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.834 qpair failed and we were unable to recover it. 00:33:12.834 [2024-07-22 18:10:16.884782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.885140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.885167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.834 qpair failed and we were unable to recover it. 00:33:12.834 [2024-07-22 18:10:16.885511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.885848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.834 [2024-07-22 18:10:16.885874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.835 qpair failed and we were unable to recover it. 
00:33:12.835 [2024-07-22 18:10:16.886232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.886579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.886613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.835 qpair failed and we were unable to recover it. 00:33:12.835 [2024-07-22 18:10:16.886937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.887263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.887289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.835 qpair failed and we were unable to recover it. 00:33:12.835 [2024-07-22 18:10:16.887657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.888026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.888053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.835 qpair failed and we were unable to recover it. 00:33:12.835 [2024-07-22 18:10:16.888372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.888716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.888742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.835 qpair failed and we were unable to recover it. 00:33:12.835 [2024-07-22 18:10:16.889099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.889449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.889492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.835 qpair failed and we were unable to recover it. 00:33:12.835 [2024-07-22 18:10:16.889901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.890147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.890174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.835 qpair failed and we were unable to recover it. 00:33:12.835 [2024-07-22 18:10:16.890518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.890901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.890927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.835 qpair failed and we were unable to recover it. 
00:33:12.835 [2024-07-22 18:10:16.891287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.891617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.891645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.835 qpair failed and we were unable to recover it. 00:33:12.835 [2024-07-22 18:10:16.892046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.892416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.892445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.835 qpair failed and we were unable to recover it. 00:33:12.835 [2024-07-22 18:10:16.892820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.893177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.893203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.835 qpair failed and we were unable to recover it. 00:33:12.835 [2024-07-22 18:10:16.893544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.893904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.893937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.835 qpair failed and we were unable to recover it. 00:33:12.835 [2024-07-22 18:10:16.894359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.894731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.894757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.835 qpair failed and we were unable to recover it. 00:33:12.835 [2024-07-22 18:10:16.894987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.895224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.895253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.835 qpair failed and we were unable to recover it. 00:33:12.835 [2024-07-22 18:10:16.895598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.895931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.895958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.835 qpair failed and we were unable to recover it. 
00:33:12.835 [2024-07-22 18:10:16.896330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.896632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.896659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.835 qpair failed and we were unable to recover it. 00:33:12.835 [2024-07-22 18:10:16.897053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.897413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.897441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.835 qpair failed and we were unable to recover it. 00:33:12.835 [2024-07-22 18:10:16.897818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.898181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.898207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.835 qpair failed and we were unable to recover it. 00:33:12.835 [2024-07-22 18:10:16.898589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.898907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.898934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.835 qpair failed and we were unable to recover it. 00:33:12.835 [2024-07-22 18:10:16.899267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.899624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.899652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.835 qpair failed and we were unable to recover it. 00:33:12.835 [2024-07-22 18:10:16.900011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.900372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.900401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.835 qpair failed and we were unable to recover it. 00:33:12.835 [2024-07-22 18:10:16.900726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.901085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.901111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.835 qpair failed and we were unable to recover it. 
00:33:12.835 [2024-07-22 18:10:16.901465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.901787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.901814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.835 qpair failed and we were unable to recover it. 00:33:12.835 [2024-07-22 18:10:16.902205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.902537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.902565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.835 qpair failed and we were unable to recover it. 00:33:12.835 [2024-07-22 18:10:16.902938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.903303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.903331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.835 qpair failed and we were unable to recover it. 00:33:12.835 [2024-07-22 18:10:16.903670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.903895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.903925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.835 qpair failed and we were unable to recover it. 00:33:12.835 [2024-07-22 18:10:16.904329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.904594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.904624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.835 qpair failed and we were unable to recover it. 00:33:12.835 [2024-07-22 18:10:16.905058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.905380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.905408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.835 qpair failed and we were unable to recover it. 00:33:12.835 [2024-07-22 18:10:16.905884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.906218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.835 [2024-07-22 18:10:16.906244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.835 qpair failed and we were unable to recover it. 
00:33:12.835 [2024-07-22 18:10:16.906572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.906935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.906961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.836 qpair failed and we were unable to recover it. 00:33:12.836 [2024-07-22 18:10:16.907302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.907660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.907689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.836 qpair failed and we were unable to recover it. 00:33:12.836 [2024-07-22 18:10:16.908055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.908418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.908445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.836 qpair failed and we were unable to recover it. 00:33:12.836 [2024-07-22 18:10:16.908828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.909194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.909221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.836 qpair failed and we were unable to recover it. 00:33:12.836 [2024-07-22 18:10:16.909557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.909926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.909953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.836 qpair failed and we were unable to recover it. 00:33:12.836 [2024-07-22 18:10:16.910295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.910663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.910691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.836 qpair failed and we were unable to recover it. 00:33:12.836 [2024-07-22 18:10:16.911033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.911415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.911443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.836 qpair failed and we were unable to recover it. 
00:33:12.836 [2024-07-22 18:10:16.911853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.912215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.912242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.836 qpair failed and we were unable to recover it. 00:33:12.836 [2024-07-22 18:10:16.912532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.912880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.912907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.836 qpair failed and we were unable to recover it. 00:33:12.836 [2024-07-22 18:10:16.913275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.913622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.913651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.836 qpair failed and we were unable to recover it. 00:33:12.836 [2024-07-22 18:10:16.914045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.914325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.914368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.836 qpair failed and we were unable to recover it. 00:33:12.836 [2024-07-22 18:10:16.914726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.915097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.915125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.836 qpair failed and we were unable to recover it. 00:33:12.836 [2024-07-22 18:10:16.915522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.915751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.915782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.836 qpair failed and we were unable to recover it. 00:33:12.836 [2024-07-22 18:10:16.916164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.916537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.916565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.836 qpair failed and we were unable to recover it. 
00:33:12.836 [2024-07-22 18:10:16.916911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.917286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.917313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.836 qpair failed and we were unable to recover it. 00:33:12.836 [2024-07-22 18:10:16.917704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.918063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.918089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.836 qpair failed and we were unable to recover it. 00:33:12.836 [2024-07-22 18:10:16.918456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.918835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.918863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.836 qpair failed and we were unable to recover it. 00:33:12.836 [2024-07-22 18:10:16.919129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.919465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.919493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.836 qpair failed and we were unable to recover it. 00:33:12.836 [2024-07-22 18:10:16.919881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.920245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.920271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.836 qpair failed and we were unable to recover it. 00:33:12.836 [2024-07-22 18:10:16.920606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.920959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.920985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.836 qpair failed and we were unable to recover it. 00:33:12.836 [2024-07-22 18:10:16.921355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.921705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.921731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.836 qpair failed and we were unable to recover it. 
00:33:12.836 [2024-07-22 18:10:16.922099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.922460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.922489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.836 qpair failed and we were unable to recover it. 00:33:12.836 [2024-07-22 18:10:16.922826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.923074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.836 [2024-07-22 18:10:16.923100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.836 qpair failed and we were unable to recover it. 00:33:12.837 [2024-07-22 18:10:16.923436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.923830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.923856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.837 qpair failed and we were unable to recover it. 00:33:12.837 [2024-07-22 18:10:16.924225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.924577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.924605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.837 qpair failed and we were unable to recover it. 00:33:12.837 [2024-07-22 18:10:16.924939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.925256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.925283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.837 qpair failed and we were unable to recover it. 00:33:12.837 [2024-07-22 18:10:16.925524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.925846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.925873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.837 qpair failed and we were unable to recover it. 00:33:12.837 [2024-07-22 18:10:16.926069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.926443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.926471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.837 qpair failed and we were unable to recover it. 
00:33:12.837 [2024-07-22 18:10:16.926809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.927176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.927203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.837 qpair failed and we were unable to recover it. 00:33:12.837 [2024-07-22 18:10:16.927563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.927930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.927956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.837 qpair failed and we were unable to recover it. 00:33:12.837 [2024-07-22 18:10:16.928295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.928652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.928681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.837 qpair failed and we were unable to recover it. 00:33:12.837 [2024-07-22 18:10:16.928925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.929262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.929289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.837 qpair failed and we were unable to recover it. 00:33:12.837 [2024-07-22 18:10:16.929709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.930047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.930074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.837 qpair failed and we were unable to recover it. 00:33:12.837 [2024-07-22 18:10:16.930424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.930783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.930815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.837 qpair failed and we were unable to recover it. 00:33:12.837 [2024-07-22 18:10:16.931161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.931524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.931551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.837 qpair failed and we were unable to recover it. 
00:33:12.837 [2024-07-22 18:10:16.931908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.932276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.932303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.837 qpair failed and we were unable to recover it. 00:33:12.837 [2024-07-22 18:10:16.932668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.933021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.933049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.837 qpair failed and we were unable to recover it. 00:33:12.837 [2024-07-22 18:10:16.933391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.933780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.933807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.837 qpair failed and we were unable to recover it. 00:33:12.837 [2024-07-22 18:10:16.934160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.934528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.934556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.837 qpair failed and we were unable to recover it. 00:33:12.837 [2024-07-22 18:10:16.934940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.935264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.935291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.837 qpair failed and we were unable to recover it. 00:33:12.837 [2024-07-22 18:10:16.935655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.936010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.936037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.837 qpair failed and we were unable to recover it. 00:33:12.837 [2024-07-22 18:10:16.936376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.936726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.936753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.837 qpair failed and we were unable to recover it. 
00:33:12.837 [2024-07-22 18:10:16.937103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.937457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.937485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.837 qpair failed and we were unable to recover it. 00:33:12.837 [2024-07-22 18:10:16.937847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.938202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.938229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.837 qpair failed and we were unable to recover it. 00:33:12.837 [2024-07-22 18:10:16.938477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.938830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.938858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.837 qpair failed and we were unable to recover it. 00:33:12.837 [2024-07-22 18:10:16.939193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.939429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.939460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.837 qpair failed and we were unable to recover it. 00:33:12.837 [2024-07-22 18:10:16.939823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.940176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.940203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.837 qpair failed and we were unable to recover it. 00:33:12.837 [2024-07-22 18:10:16.940568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.940938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.940965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.837 qpair failed and we were unable to recover it. 00:33:12.837 [2024-07-22 18:10:16.941363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.941711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.941738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.837 qpair failed and we were unable to recover it. 
00:33:12.837 [2024-07-22 18:10:16.941980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.942388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.942418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.837 qpair failed and we were unable to recover it. 00:33:12.837 [2024-07-22 18:10:16.942814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.943238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.943265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.837 qpair failed and we were unable to recover it. 00:33:12.837 [2024-07-22 18:10:16.943515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.943915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.837 [2024-07-22 18:10:16.943942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.838 qpair failed and we were unable to recover it. 00:33:12.838 [2024-07-22 18:10:16.944283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.944658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.944686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.838 qpair failed and we were unable to recover it. 00:33:12.838 [2024-07-22 18:10:16.945038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.945381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.945409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.838 qpair failed and we were unable to recover it. 00:33:12.838 [2024-07-22 18:10:16.945679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.946043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.946073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.838 qpair failed and we were unable to recover it. 00:33:12.838 [2024-07-22 18:10:16.946438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.946791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.946818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.838 qpair failed and we were unable to recover it. 
00:33:12.838 [2024-07-22 18:10:16.947218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.947575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.947603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.838 qpair failed and we were unable to recover it. 00:33:12.838 [2024-07-22 18:10:16.947977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.948337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.948372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.838 qpair failed and we were unable to recover it. 00:33:12.838 [2024-07-22 18:10:16.948746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.949129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.949155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.838 qpair failed and we were unable to recover it. 00:33:12.838 [2024-07-22 18:10:16.949510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.949854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.949881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.838 qpair failed and we were unable to recover it. 00:33:12.838 [2024-07-22 18:10:16.950274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.950646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.950675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.838 qpair failed and we were unable to recover it. 00:33:12.838 [2024-07-22 18:10:16.951003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.951364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.951393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.838 qpair failed and we were unable to recover it. 00:33:12.838 [2024-07-22 18:10:16.951766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.952137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.952164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.838 qpair failed and we were unable to recover it. 
00:33:12.838 [2024-07-22 18:10:16.952528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.952893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.952920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.838 qpair failed and we were unable to recover it. 00:33:12.838 [2024-07-22 18:10:16.953279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.953608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.953637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.838 qpair failed and we were unable to recover it. 00:33:12.838 [2024-07-22 18:10:16.953968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.954201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.954232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.838 qpair failed and we were unable to recover it. 00:33:12.838 [2024-07-22 18:10:16.954648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.955020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.955046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.838 qpair failed and we were unable to recover it. 00:33:12.838 [2024-07-22 18:10:16.955416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.955740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.955766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.838 qpair failed and we were unable to recover it. 00:33:12.838 [2024-07-22 18:10:16.956002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.956373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.956400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.838 qpair failed and we were unable to recover it. 00:33:12.838 [2024-07-22 18:10:16.956800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.957159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.957185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.838 qpair failed and we were unable to recover it. 
00:33:12.838 [2024-07-22 18:10:16.957526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.957871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.957897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.838 qpair failed and we were unable to recover it. 00:33:12.838 [2024-07-22 18:10:16.958270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.958634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.958662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.838 qpair failed and we were unable to recover it. 00:33:12.838 [2024-07-22 18:10:16.959016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.959252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.959281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.838 qpair failed and we were unable to recover it. 00:33:12.838 [2024-07-22 18:10:16.959521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.959872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.959899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.838 qpair failed and we were unable to recover it. 00:33:12.838 [2024-07-22 18:10:16.960214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.960577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.960605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.838 qpair failed and we were unable to recover it. 00:33:12.838 [2024-07-22 18:10:16.960964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.961313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.961340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.838 qpair failed and we were unable to recover it. 00:33:12.838 [2024-07-22 18:10:16.961687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.962020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.962048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.838 qpair failed and we were unable to recover it. 
00:33:12.838 [2024-07-22 18:10:16.962424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.962784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.962810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.838 qpair failed and we were unable to recover it. 00:33:12.838 [2024-07-22 18:10:16.963211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.963572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.963599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.838 qpair failed and we were unable to recover it. 00:33:12.838 [2024-07-22 18:10:16.963959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.964316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.964343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.838 qpair failed and we were unable to recover it. 00:33:12.838 [2024-07-22 18:10:16.964555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.838 [2024-07-22 18:10:16.964925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.964952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.839 qpair failed and we were unable to recover it. 00:33:12.839 [2024-07-22 18:10:16.965333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.965704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.965732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.839 qpair failed and we were unable to recover it. 00:33:12.839 [2024-07-22 18:10:16.966081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.966259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.966290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.839 qpair failed and we were unable to recover it. 00:33:12.839 [2024-07-22 18:10:16.966647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.966979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.967005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.839 qpair failed and we were unable to recover it. 
00:33:12.839 [2024-07-22 18:10:16.967251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.967575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.967609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.839 qpair failed and we were unable to recover it. 00:33:12.839 [2024-07-22 18:10:16.967968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.968335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.968369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.839 qpair failed and we were unable to recover it. 00:33:12.839 [2024-07-22 18:10:16.968702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.969057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.969084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.839 qpair failed and we were unable to recover it. 00:33:12.839 [2024-07-22 18:10:16.969453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.969836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.969863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.839 qpair failed and we were unable to recover it. 00:33:12.839 [2024-07-22 18:10:16.970182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.970532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.970560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.839 qpair failed and we were unable to recover it. 00:33:12.839 [2024-07-22 18:10:16.970937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.971293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.971319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.839 qpair failed and we were unable to recover it. 00:33:12.839 [2024-07-22 18:10:16.971689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.972047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.972073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.839 qpair failed and we were unable to recover it. 
00:33:12.839 [2024-07-22 18:10:16.972436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.972848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.972875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.839 qpair failed and we were unable to recover it. 00:33:12.839 [2024-07-22 18:10:16.973246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.973486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.973513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.839 qpair failed and we were unable to recover it. 00:33:12.839 [2024-07-22 18:10:16.973945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.974180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.974207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.839 qpair failed and we were unable to recover it. 00:33:12.839 [2024-07-22 18:10:16.974591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.974948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.974975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.839 qpair failed and we were unable to recover it. 00:33:12.839 [2024-07-22 18:10:16.975210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.975658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.975686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.839 qpair failed and we were unable to recover it. 00:33:12.839 [2024-07-22 18:10:16.976044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.976255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.976282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.839 qpair failed and we were unable to recover it. 00:33:12.839 [2024-07-22 18:10:16.976614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.976966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.976992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.839 qpair failed and we were unable to recover it. 
00:33:12.839 [2024-07-22 18:10:16.977387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.977778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.977806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.839 qpair failed and we were unable to recover it. 00:33:12.839 [2024-07-22 18:10:16.978161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.978520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.978549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.839 qpair failed and we were unable to recover it. 00:33:12.839 [2024-07-22 18:10:16.978914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.979275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.979301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.839 qpair failed and we were unable to recover it. 00:33:12.839 [2024-07-22 18:10:16.979560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.979794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.979821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.839 qpair failed and we were unable to recover it. 00:33:12.839 [2024-07-22 18:10:16.980170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.980409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.980437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.839 qpair failed and we were unable to recover it. 00:33:12.839 [2024-07-22 18:10:16.980824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.981171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.981198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.839 qpair failed and we were unable to recover it. 00:33:12.839 [2024-07-22 18:10:16.981545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.981905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.981931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.839 qpair failed and we were unable to recover it. 
00:33:12.839 [2024-07-22 18:10:16.982266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.982485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.982512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.839 qpair failed and we were unable to recover it. 00:33:12.839 [2024-07-22 18:10:16.982892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.983255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.983281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.839 qpair failed and we were unable to recover it. 00:33:12.839 [2024-07-22 18:10:16.983652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.984065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.984091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.839 qpair failed and we were unable to recover it. 00:33:12.839 [2024-07-22 18:10:16.984503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.984854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.984880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.839 qpair failed and we were unable to recover it. 00:33:12.839 [2024-07-22 18:10:16.985248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.839 [2024-07-22 18:10:16.985575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.985603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.840 qpair failed and we were unable to recover it. 00:33:12.840 [2024-07-22 18:10:16.985936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.986305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.986332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.840 qpair failed and we were unable to recover it. 00:33:12.840 [2024-07-22 18:10:16.986737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.986975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.987002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.840 qpair failed and we were unable to recover it. 
00:33:12.840 [2024-07-22 18:10:16.987393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.987758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.987785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.840 qpair failed and we were unable to recover it. 00:33:12.840 [2024-07-22 18:10:16.988121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.988472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.988499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.840 qpair failed and we were unable to recover it. 00:33:12.840 [2024-07-22 18:10:16.988857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.989217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.989243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.840 qpair failed and we were unable to recover it. 00:33:12.840 [2024-07-22 18:10:16.989577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.989888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.989916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.840 qpair failed and we were unable to recover it. 00:33:12.840 [2024-07-22 18:10:16.990268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.990593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.990620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.840 qpair failed and we were unable to recover it. 00:33:12.840 [2024-07-22 18:10:16.990970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.991305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.991331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.840 qpair failed and we were unable to recover it. 00:33:12.840 [2024-07-22 18:10:16.991641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.991996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.992022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.840 qpair failed and we were unable to recover it. 
00:33:12.840 [2024-07-22 18:10:16.992251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.992564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.992593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.840 qpair failed and we were unable to recover it. 00:33:12.840 [2024-07-22 18:10:16.992966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.993315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.993341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.840 qpair failed and we were unable to recover it. 00:33:12.840 [2024-07-22 18:10:16.993675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.994029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.994056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.840 qpair failed and we were unable to recover it. 00:33:12.840 [2024-07-22 18:10:16.994398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.994725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.994752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.840 qpair failed and we were unable to recover it. 00:33:12.840 [2024-07-22 18:10:16.995126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.995489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.995516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.840 qpair failed and we were unable to recover it. 00:33:12.840 [2024-07-22 18:10:16.995909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.996249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.996277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.840 qpair failed and we were unable to recover it. 00:33:12.840 [2024-07-22 18:10:16.996674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.997023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.997050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.840 qpair failed and we were unable to recover it. 
00:33:12.840 [2024-07-22 18:10:16.997437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.997799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.997826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.840 qpair failed and we were unable to recover it. 00:33:12.840 [2024-07-22 18:10:16.998182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.998539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.998567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.840 qpair failed and we were unable to recover it. 00:33:12.840 [2024-07-22 18:10:16.998950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.999298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.999325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.840 qpair failed and we were unable to recover it. 00:33:12.840 [2024-07-22 18:10:16.999748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:16.999979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:17.000010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.840 qpair failed and we were unable to recover it. 00:33:12.840 [2024-07-22 18:10:17.000402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:17.000791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:17.000819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.840 qpair failed and we were unable to recover it. 00:33:12.840 [2024-07-22 18:10:17.001110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:17.001301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:17.001331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.840 qpair failed and we were unable to recover it. 00:33:12.840 [2024-07-22 18:10:17.001731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:17.002102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:17.002129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.840 qpair failed and we were unable to recover it. 
00:33:12.840 [2024-07-22 18:10:17.002466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:17.002838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:17.002868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.840 qpair failed and we were unable to recover it. 00:33:12.840 [2024-07-22 18:10:17.003263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:17.003622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:17.003653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.840 qpair failed and we were unable to recover it. 00:33:12.840 [2024-07-22 18:10:17.004008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:17.004395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:17.004431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.840 qpair failed and we were unable to recover it. 00:33:12.840 [2024-07-22 18:10:17.004801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:17.005157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:17.005184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.840 qpair failed and we were unable to recover it. 00:33:12.840 [2024-07-22 18:10:17.005431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:17.005833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.840 [2024-07-22 18:10:17.005860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.840 qpair failed and we were unable to recover it. 00:33:12.841 [2024-07-22 18:10:17.006149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.006515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.006543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.841 qpair failed and we were unable to recover it. 00:33:12.841 [2024-07-22 18:10:17.006912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.007260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.007288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.841 qpair failed and we were unable to recover it. 
00:33:12.841 [2024-07-22 18:10:17.007690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.007918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.007948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.841 qpair failed and we were unable to recover it. 00:33:12.841 [2024-07-22 18:10:17.008237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.008459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.008487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.841 qpair failed and we were unable to recover it. 00:33:12.841 [2024-07-22 18:10:17.008857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.009255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.009282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.841 qpair failed and we were unable to recover it. 00:33:12.841 [2024-07-22 18:10:17.009563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.009915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.009942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.841 qpair failed and we were unable to recover it. 00:33:12.841 [2024-07-22 18:10:17.010181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.010585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.010613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.841 qpair failed and we were unable to recover it. 00:33:12.841 [2024-07-22 18:10:17.010949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.011286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.011313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.841 qpair failed and we were unable to recover it. 00:33:12.841 [2024-07-22 18:10:17.011756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.012119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.012146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.841 qpair failed and we were unable to recover it. 
00:33:12.841 [2024-07-22 18:10:17.012499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.012844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.012871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.841 qpair failed and we were unable to recover it. 00:33:12.841 [2024-07-22 18:10:17.013268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.013619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.013646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.841 qpair failed and we were unable to recover it. 00:33:12.841 [2024-07-22 18:10:17.014018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.014332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.014367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.841 qpair failed and we were unable to recover it. 00:33:12.841 [2024-07-22 18:10:17.014638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.014893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.014924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.841 qpair failed and we were unable to recover it. 00:33:12.841 [2024-07-22 18:10:17.015265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.015503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.015531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.841 qpair failed and we were unable to recover it. 00:33:12.841 [2024-07-22 18:10:17.015929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.016264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.016291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.841 qpair failed and we were unable to recover it. 00:33:12.841 [2024-07-22 18:10:17.016509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.016852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.016879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.841 qpair failed and we were unable to recover it. 
00:33:12.841 [2024-07-22 18:10:17.017088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.017413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.017441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.841 qpair failed and we were unable to recover it. 00:33:12.841 [2024-07-22 18:10:17.017815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.018164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.018191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.841 qpair failed and we were unable to recover it. 00:33:12.841 [2024-07-22 18:10:17.018575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.018931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.018958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.841 qpair failed and we were unable to recover it. 00:33:12.841 [2024-07-22 18:10:17.019297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.019661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.019688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.841 qpair failed and we were unable to recover it. 00:33:12.841 [2024-07-22 18:10:17.020056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.020418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.020445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.841 qpair failed and we were unable to recover it. 00:33:12.841 [2024-07-22 18:10:17.020814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.021216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.021243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.841 qpair failed and we were unable to recover it. 00:33:12.841 [2024-07-22 18:10:17.021646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.021973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.841 [2024-07-22 18:10:17.021999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.841 qpair failed and we were unable to recover it. 
00:33:12.841 [2024-07-22 18:10:17.022362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.022735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.022761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.842 qpair failed and we were unable to recover it. 00:33:12.842 [2024-07-22 18:10:17.023101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.023463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.023491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.842 qpair failed and we were unable to recover it. 00:33:12.842 [2024-07-22 18:10:17.023838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.024197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.024225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.842 qpair failed and we were unable to recover it. 00:33:12.842 [2024-07-22 18:10:17.024647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.025015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.025042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.842 qpair failed and we were unable to recover it. 00:33:12.842 [2024-07-22 18:10:17.025384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.025777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.025804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.842 qpair failed and we were unable to recover it. 00:33:12.842 [2024-07-22 18:10:17.026155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.026400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.026429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.842 qpair failed and we were unable to recover it. 00:33:12.842 [2024-07-22 18:10:17.026690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.027105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.027131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.842 qpair failed and we were unable to recover it. 
00:33:12.842 [2024-07-22 18:10:17.027462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.027856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.027882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.842 qpair failed and we were unable to recover it. 00:33:12.842 [2024-07-22 18:10:17.028261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.028637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.028665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.842 qpair failed and we were unable to recover it. 00:33:12.842 [2024-07-22 18:10:17.029038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.029422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.029450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.842 qpair failed and we were unable to recover it. 00:33:12.842 [2024-07-22 18:10:17.029675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.029998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.030025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.842 qpair failed and we were unable to recover it. 00:33:12.842 [2024-07-22 18:10:17.030408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.030767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.030795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.842 qpair failed and we were unable to recover it. 00:33:12.842 [2024-07-22 18:10:17.031130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.031492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.031520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.842 qpair failed and we were unable to recover it. 00:33:12.842 [2024-07-22 18:10:17.031919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.032271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.032298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.842 qpair failed and we were unable to recover it. 
00:33:12.842 [2024-07-22 18:10:17.032699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.033060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.033087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.842 qpair failed and we were unable to recover it. 00:33:12.842 [2024-07-22 18:10:17.033489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.033815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.033842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.842 qpair failed and we were unable to recover it. 00:33:12.842 [2024-07-22 18:10:17.034093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.034386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.034413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.842 qpair failed and we were unable to recover it. 00:33:12.842 [2024-07-22 18:10:17.034674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.035042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.035069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.842 qpair failed and we were unable to recover it. 00:33:12.842 [2024-07-22 18:10:17.035425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.035773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.035800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.842 qpair failed and we were unable to recover it. 00:33:12.842 [2024-07-22 18:10:17.036201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.036541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.036569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.842 qpair failed and we were unable to recover it. 00:33:12.842 [2024-07-22 18:10:17.036918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.037281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.037307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.842 qpair failed and we were unable to recover it. 
00:33:12.842 [2024-07-22 18:10:17.037662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.038021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.038048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.842 qpair failed and we were unable to recover it. 00:33:12.842 [2024-07-22 18:10:17.038418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.038792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.038819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.842 qpair failed and we were unable to recover it. 00:33:12.842 [2024-07-22 18:10:17.039183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.039413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.039441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.842 qpair failed and we were unable to recover it. 00:33:12.842 [2024-07-22 18:10:17.039812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.040175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.040201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.842 qpair failed and we were unable to recover it. 00:33:12.842 [2024-07-22 18:10:17.040539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.040906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.040938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.842 qpair failed and we were unable to recover it. 00:33:12.842 [2024-07-22 18:10:17.041273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.041587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.041615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.842 qpair failed and we were unable to recover it. 00:33:12.842 [2024-07-22 18:10:17.042019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.042394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.042422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.842 qpair failed and we were unable to recover it. 
00:33:12.842 [2024-07-22 18:10:17.042791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.043152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.842 [2024-07-22 18:10:17.043178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.843 qpair failed and we were unable to recover it. 00:33:12.843 [2024-07-22 18:10:17.043551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.043885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.043912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.843 qpair failed and we were unable to recover it. 00:33:12.843 [2024-07-22 18:10:17.044271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.044633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.044661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.843 qpair failed and we were unable to recover it. 00:33:12.843 [2024-07-22 18:10:17.045017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.045367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.045395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.843 qpair failed and we were unable to recover it. 00:33:12.843 [2024-07-22 18:10:17.045750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.046104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.046130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.843 qpair failed and we were unable to recover it. 00:33:12.843 [2024-07-22 18:10:17.046494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.046850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.046877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.843 qpair failed and we were unable to recover it. 00:33:12.843 [2024-07-22 18:10:17.047221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.047576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.047604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.843 qpair failed and we were unable to recover it. 
00:33:12.843 [2024-07-22 18:10:17.047833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.048078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.048114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.843 qpair failed and we were unable to recover it. 00:33:12.843 [2024-07-22 18:10:17.048502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.048892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.048918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.843 qpair failed and we were unable to recover it. 00:33:12.843 [2024-07-22 18:10:17.049252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.049484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.049511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.843 qpair failed and we were unable to recover it. 00:33:12.843 [2024-07-22 18:10:17.049876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.050241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.050267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.843 qpair failed and we were unable to recover it. 00:33:12.843 [2024-07-22 18:10:17.050511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.050868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.050896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.843 qpair failed and we were unable to recover it. 00:33:12.843 [2024-07-22 18:10:17.051274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.051508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.051536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.843 qpair failed and we were unable to recover it. 00:33:12.843 [2024-07-22 18:10:17.051736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.052132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.052158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.843 qpair failed and we were unable to recover it. 
00:33:12.843 [2024-07-22 18:10:17.052535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.052909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.052935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.843 qpair failed and we were unable to recover it. 00:33:12.843 [2024-07-22 18:10:17.053335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.053586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.053614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.843 qpair failed and we were unable to recover it. 00:33:12.843 [2024-07-22 18:10:17.053967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.054320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.054347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.843 qpair failed and we were unable to recover it. 00:33:12.843 [2024-07-22 18:10:17.054697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.055062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.055089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.843 qpair failed and we were unable to recover it. 00:33:12.843 [2024-07-22 18:10:17.055461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.055802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.055828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.843 qpair failed and we were unable to recover it. 00:33:12.843 [2024-07-22 18:10:17.056189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.056536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.056564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.843 qpair failed and we were unable to recover it. 00:33:12.843 [2024-07-22 18:10:17.056933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.057288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.057315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.843 qpair failed and we were unable to recover it. 
00:33:12.843 [2024-07-22 18:10:17.057504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.057881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.057909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.843 qpair failed and we were unable to recover it. 00:33:12.843 [2024-07-22 18:10:17.058267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.058628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.058656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.843 qpair failed and we were unable to recover it. 00:33:12.843 [2024-07-22 18:10:17.058978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.059334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.059372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.843 qpair failed and we were unable to recover it. 00:33:12.843 [2024-07-22 18:10:17.059736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.060110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.060136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.843 qpair failed and we were unable to recover it. 00:33:12.843 [2024-07-22 18:10:17.060509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.060808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.060835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.843 qpair failed and we were unable to recover it. 00:33:12.843 [2024-07-22 18:10:17.061200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.061564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.061591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.843 qpair failed and we were unable to recover it. 00:33:12.843 [2024-07-22 18:10:17.061949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.062368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.062396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.843 qpair failed and we were unable to recover it. 
00:33:12.843 [2024-07-22 18:10:17.062775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.063114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.063141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.843 qpair failed and we were unable to recover it. 00:33:12.843 [2024-07-22 18:10:17.063528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.843 [2024-07-22 18:10:17.063902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.063929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.844 qpair failed and we were unable to recover it. 00:33:12.844 [2024-07-22 18:10:17.064205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.064440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.064471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.844 qpair failed and we were unable to recover it. 00:33:12.844 [2024-07-22 18:10:17.064829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.065133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.065160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.844 qpair failed and we were unable to recover it. 00:33:12.844 [2024-07-22 18:10:17.065482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.065824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.065851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.844 qpair failed and we were unable to recover it. 00:33:12.844 [2024-07-22 18:10:17.066194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.066443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.066471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.844 qpair failed and we were unable to recover it. 00:33:12.844 [2024-07-22 18:10:17.066745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.067154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.067181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.844 qpair failed and we were unable to recover it. 
00:33:12.844 [2024-07-22 18:10:17.067562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.067792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.067822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.844 qpair failed and we were unable to recover it. 00:33:12.844 [2024-07-22 18:10:17.068211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.068561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.068589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.844 qpair failed and we were unable to recover it. 00:33:12.844 [2024-07-22 18:10:17.068988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.069358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.069386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.844 qpair failed and we were unable to recover it. 00:33:12.844 [2024-07-22 18:10:17.069777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.070167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.070194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.844 qpair failed and we were unable to recover it. 00:33:12.844 [2024-07-22 18:10:17.070529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.070888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.070915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.844 qpair failed and we were unable to recover it. 00:33:12.844 [2024-07-22 18:10:17.071323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.071689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.071717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.844 qpair failed and we were unable to recover it. 00:33:12.844 [2024-07-22 18:10:17.072077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.072435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.072463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.844 qpair failed and we were unable to recover it. 
00:33:12.844 [2024-07-22 18:10:17.072828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.073185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.073213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.844 qpair failed and we were unable to recover it. 00:33:12.844 [2024-07-22 18:10:17.073540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.073776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.073803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.844 qpair failed and we were unable to recover it. 00:33:12.844 [2024-07-22 18:10:17.074211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.074527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.074555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.844 qpair failed and we were unable to recover it. 00:33:12.844 [2024-07-22 18:10:17.074920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.075300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.075327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.844 qpair failed and we were unable to recover it. 00:33:12.844 [2024-07-22 18:10:17.075587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.075968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.075995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.844 qpair failed and we were unable to recover it. 00:33:12.844 [2024-07-22 18:10:17.076370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.076734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.076761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.844 qpair failed and we were unable to recover it. 00:33:12.844 [2024-07-22 18:10:17.077129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.077473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.077501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.844 qpair failed and we were unable to recover it. 
00:33:12.844 [2024-07-22 18:10:17.077772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.078121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.078148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.844 qpair failed and we were unable to recover it. 00:33:12.844 [2024-07-22 18:10:17.078487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.078881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.078908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.844 qpair failed and we were unable to recover it. 00:33:12.844 [2024-07-22 18:10:17.079268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.079632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.079660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.844 qpair failed and we were unable to recover it. 00:33:12.844 [2024-07-22 18:10:17.080018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.080257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.080283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.844 qpair failed and we were unable to recover it. 00:33:12.844 [2024-07-22 18:10:17.080525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.080856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.080883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.844 qpair failed and we were unable to recover it. 00:33:12.844 [2024-07-22 18:10:17.081248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.081598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.081626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.844 qpair failed and we were unable to recover it. 00:33:12.844 [2024-07-22 18:10:17.081993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.082342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.082377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.844 qpair failed and we were unable to recover it. 
00:33:12.844 [2024-07-22 18:10:17.082611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.082999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.083025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.844 qpair failed and we were unable to recover it. 00:33:12.844 [2024-07-22 18:10:17.083391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.083771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.844 [2024-07-22 18:10:17.083798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.844 qpair failed and we were unable to recover it. 00:33:12.844 [2024-07-22 18:10:17.084199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.084562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.084595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.845 qpair failed and we were unable to recover it. 00:33:12.845 [2024-07-22 18:10:17.084955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.085183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.085211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.845 qpair failed and we were unable to recover it. 00:33:12.845 [2024-07-22 18:10:17.085560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.085917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.085944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.845 qpair failed and we were unable to recover it. 00:33:12.845 [2024-07-22 18:10:17.086334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.086686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.086714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.845 qpair failed and we were unable to recover it. 00:33:12.845 [2024-07-22 18:10:17.087055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.087430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.087457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.845 qpair failed and we were unable to recover it. 
00:33:12.845 [2024-07-22 18:10:17.087816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.088167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.088196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.845 qpair failed and we were unable to recover it. 00:33:12.845 [2024-07-22 18:10:17.088553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.088912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.088940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.845 qpair failed and we were unable to recover it. 00:33:12.845 [2024-07-22 18:10:17.089312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.089668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.089696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.845 qpair failed and we were unable to recover it. 00:33:12.845 [2024-07-22 18:10:17.090054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.090449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.090477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.845 qpair failed and we were unable to recover it. 00:33:12.845 [2024-07-22 18:10:17.090866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.091133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.091161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.845 qpair failed and we were unable to recover it. 00:33:12.845 [2024-07-22 18:10:17.091506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.091741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.091768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.845 qpair failed and we were unable to recover it. 00:33:12.845 [2024-07-22 18:10:17.092138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.092492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.092520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.845 qpair failed and we were unable to recover it. 
00:33:12.845 [2024-07-22 18:10:17.092871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.093231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.093258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.845 qpair failed and we were unable to recover it. 00:33:12.845 [2024-07-22 18:10:17.093503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.093844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.093872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.845 qpair failed and we were unable to recover it. 00:33:12.845 [2024-07-22 18:10:17.094251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.094547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.094576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.845 qpair failed and we were unable to recover it. 00:33:12.845 [2024-07-22 18:10:17.094949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.095301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.095328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.845 qpair failed and we were unable to recover it. 00:33:12.845 [2024-07-22 18:10:17.095567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.095940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.095967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.845 qpair failed and we were unable to recover it. 00:33:12.845 [2024-07-22 18:10:17.096320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.096550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.096577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.845 qpair failed and we were unable to recover it. 00:33:12.845 [2024-07-22 18:10:17.096820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.097179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.097206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.845 qpair failed and we were unable to recover it. 
00:33:12.845 [2024-07-22 18:10:17.097579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.098034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.098062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.845 qpair failed and we were unable to recover it. 00:33:12.845 [2024-07-22 18:10:17.098491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.098859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.098885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.845 qpair failed and we were unable to recover it. 00:33:12.845 [2024-07-22 18:10:17.099287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.099669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.845 [2024-07-22 18:10:17.099697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:12.845 qpair failed and we were unable to recover it. 00:33:12.845 [2024-07-22 18:10:17.100109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.114 [2024-07-22 18:10:17.100463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.114 [2024-07-22 18:10:17.100491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.114 qpair failed and we were unable to recover it. 00:33:13.114 [2024-07-22 18:10:17.100925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.114 [2024-07-22 18:10:17.101373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.114 [2024-07-22 18:10:17.101402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.114 qpair failed and we were unable to recover it. 00:33:13.114 [2024-07-22 18:10:17.101763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.114 [2024-07-22 18:10:17.102169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.114 [2024-07-22 18:10:17.102196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.114 qpair failed and we were unable to recover it. 00:33:13.114 [2024-07-22 18:10:17.102578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.114 [2024-07-22 18:10:17.102974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.114 [2024-07-22 18:10:17.103001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.114 qpair failed and we were unable to recover it. 
00:33:13.114 [2024-07-22 18:10:17.103376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.114 [2024-07-22 18:10:17.103751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.103779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.115 qpair failed and we were unable to recover it. 00:33:13.115 [2024-07-22 18:10:17.104126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.104484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.104511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.115 qpair failed and we were unable to recover it. 00:33:13.115 [2024-07-22 18:10:17.104873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.105232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.105258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.115 qpair failed and we were unable to recover it. 00:33:13.115 [2024-07-22 18:10:17.105594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.105954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.105980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.115 qpair failed and we were unable to recover it. 00:33:13.115 [2024-07-22 18:10:17.106292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.106674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.106702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.115 qpair failed and we were unable to recover it. 00:33:13.115 [2024-07-22 18:10:17.107066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.107464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.107492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.115 qpair failed and we were unable to recover it. 00:33:13.115 [2024-07-22 18:10:17.107858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.108214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.108240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.115 qpair failed and we were unable to recover it. 
00:33:13.115 [2024-07-22 18:10:17.108577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.108899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.108926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.115 qpair failed and we were unable to recover it. 00:33:13.115 [2024-07-22 18:10:17.109279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.109648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.109676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.115 qpair failed and we were unable to recover it. 00:33:13.115 [2024-07-22 18:10:17.110009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.110363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.110391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.115 qpair failed and we were unable to recover it. 00:33:13.115 [2024-07-22 18:10:17.110759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.111117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.111143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.115 qpair failed and we were unable to recover it. 00:33:13.115 [2024-07-22 18:10:17.111386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.111736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.111763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.115 qpair failed and we were unable to recover it. 00:33:13.115 [2024-07-22 18:10:17.112103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.112337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.112380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.115 qpair failed and we were unable to recover it. 00:33:13.115 [2024-07-22 18:10:17.112772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.113138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.113166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.115 qpair failed and we were unable to recover it. 
00:33:13.115 [2024-07-22 18:10:17.113525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.113870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.113896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.115 qpair failed and we were unable to recover it. 00:33:13.115 [2024-07-22 18:10:17.114256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.114600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.114628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.115 qpair failed and we were unable to recover it. 00:33:13.115 [2024-07-22 18:10:17.114988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.115356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.115384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.115 qpair failed and we were unable to recover it. 00:33:13.115 [2024-07-22 18:10:17.115722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.116106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.116132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.115 qpair failed and we were unable to recover it. 00:33:13.115 [2024-07-22 18:10:17.116373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.116786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.116813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.115 qpair failed and we were unable to recover it. 00:33:13.115 [2024-07-22 18:10:17.117061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.117393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.117421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.115 qpair failed and we were unable to recover it. 00:33:13.115 [2024-07-22 18:10:17.117777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.118090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.118118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.115 qpair failed and we were unable to recover it. 
00:33:13.115 [2024-07-22 18:10:17.118519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.118754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.118781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.115 qpair failed and we were unable to recover it. 00:33:13.115 [2024-07-22 18:10:17.119136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.119380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.119412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.115 qpair failed and we were unable to recover it. 00:33:13.115 [2024-07-22 18:10:17.119773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.120147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.120174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.115 qpair failed and we were unable to recover it. 00:33:13.115 [2024-07-22 18:10:17.120550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.120865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.120891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.115 qpair failed and we were unable to recover it. 00:33:13.115 [2024-07-22 18:10:17.121260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.121620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.121654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.115 qpair failed and we were unable to recover it. 00:33:13.115 [2024-07-22 18:10:17.121919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.122269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.122295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.115 qpair failed and we were unable to recover it. 00:33:13.115 [2024-07-22 18:10:17.122657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.122995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.123022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.115 qpair failed and we were unable to recover it. 
00:33:13.115 [2024-07-22 18:10:17.123396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.123765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.115 [2024-07-22 18:10:17.123792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.115 qpair failed and we were unable to recover it. 00:33:13.116 [2024-07-22 18:10:17.124085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.124330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.124373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.116 qpair failed and we were unable to recover it. 00:33:13.116 [2024-07-22 18:10:17.124648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.125015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.125041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.116 qpair failed and we were unable to recover it. 00:33:13.116 [2024-07-22 18:10:17.125417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.125818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.125845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.116 qpair failed and we were unable to recover it. 00:33:13.116 [2024-07-22 18:10:17.126217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.126448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.126475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.116 qpair failed and we were unable to recover it. 00:33:13.116 [2024-07-22 18:10:17.126771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.127133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.127160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.116 qpair failed and we were unable to recover it. 00:33:13.116 [2024-07-22 18:10:17.127552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.127773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.127803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.116 qpair failed and we were unable to recover it. 
00:33:13.116 [2024-07-22 18:10:17.128094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.128455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.128482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.116 qpair failed and we were unable to recover it. 00:33:13.116 [2024-07-22 18:10:17.128719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.128973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.129000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.116 qpair failed and we were unable to recover it. 00:33:13.116 [2024-07-22 18:10:17.129269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.129672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.129700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.116 qpair failed and we were unable to recover it. 00:33:13.116 [2024-07-22 18:10:17.130048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.130406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.130434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.116 qpair failed and we were unable to recover it. 00:33:13.116 [2024-07-22 18:10:17.130767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.131149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.131176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.116 qpair failed and we were unable to recover it. 00:33:13.116 [2024-07-22 18:10:17.131545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.131900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.131926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.116 qpair failed and we were unable to recover it. 00:33:13.116 [2024-07-22 18:10:17.132262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.132632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.132660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.116 qpair failed and we were unable to recover it. 
00:33:13.116 [2024-07-22 18:10:17.133021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.133376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.133404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.116 qpair failed and we were unable to recover it. 00:33:13.116 [2024-07-22 18:10:17.133744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.134101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.134127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.116 qpair failed and we were unable to recover it. 00:33:13.116 [2024-07-22 18:10:17.134462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.134819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.134846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.116 qpair failed and we were unable to recover it. 00:33:13.116 [2024-07-22 18:10:17.135073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.135428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.135457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.116 qpair failed and we were unable to recover it. 00:33:13.116 [2024-07-22 18:10:17.135843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.136198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.136225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.116 qpair failed and we were unable to recover it. 00:33:13.116 [2024-07-22 18:10:17.136579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.136935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.136962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.116 qpair failed and we were unable to recover it. 00:33:13.116 [2024-07-22 18:10:17.137336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.137744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.137773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.116 qpair failed and we were unable to recover it. 
00:33:13.116 [2024-07-22 18:10:17.138129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.138516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.138544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.116 qpair failed and we were unable to recover it. 00:33:13.116 [2024-07-22 18:10:17.138774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.139109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.139137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.116 qpair failed and we were unable to recover it. 00:33:13.116 [2024-07-22 18:10:17.139459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.139702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.139729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.116 qpair failed and we were unable to recover it. 00:33:13.116 [2024-07-22 18:10:17.140158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.140521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.140549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.116 qpair failed and we were unable to recover it. 00:33:13.116 [2024-07-22 18:10:17.140900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.141264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.141290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.116 qpair failed and we were unable to recover it. 00:33:13.116 [2024-07-22 18:10:17.141658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.142036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.142063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.116 qpair failed and we were unable to recover it. 00:33:13.116 [2024-07-22 18:10:17.142416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.142769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.142796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.116 qpair failed and we were unable to recover it. 
00:33:13.116 [2024-07-22 18:10:17.143049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.143402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.116 [2024-07-22 18:10:17.143430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.116 qpair failed and we were unable to recover it. 00:33:13.116 [2024-07-22 18:10:17.143760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.144128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.144154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.117 qpair failed and we were unable to recover it. 00:33:13.117 [2024-07-22 18:10:17.144523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.144869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.144896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.117 qpair failed and we were unable to recover it. 00:33:13.117 [2024-07-22 18:10:17.145295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.145656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.145684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.117 qpair failed and we were unable to recover it. 00:33:13.117 [2024-07-22 18:10:17.146056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.146385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.146413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.117 qpair failed and we were unable to recover it. 00:33:13.117 [2024-07-22 18:10:17.146786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.147149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.147175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.117 qpair failed and we were unable to recover it. 00:33:13.117 [2024-07-22 18:10:17.147424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.147814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.147841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.117 qpair failed and we were unable to recover it. 
00:33:13.117 [2024-07-22 18:10:17.148184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.148555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.148584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.117 qpair failed and we were unable to recover it. 00:33:13.117 [2024-07-22 18:10:17.148922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.149282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.149308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.117 qpair failed and we were unable to recover it. 00:33:13.117 [2024-07-22 18:10:17.149649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.150002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.150029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.117 qpair failed and we were unable to recover it. 00:33:13.117 [2024-07-22 18:10:17.150388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.150736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.150763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.117 qpair failed and we were unable to recover it. 00:33:13.117 [2024-07-22 18:10:17.151118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.151487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.151515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.117 qpair failed and we were unable to recover it. 00:33:13.117 [2024-07-22 18:10:17.151891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.152162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.152188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.117 qpair failed and we were unable to recover it. 00:33:13.117 [2024-07-22 18:10:17.152560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.152882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.152908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.117 qpair failed and we were unable to recover it. 
00:33:13.117 [2024-07-22 18:10:17.153288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.153651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.153680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.117 qpair failed and we were unable to recover it. 00:33:13.117 [2024-07-22 18:10:17.154081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.154440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.154467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.117 qpair failed and we were unable to recover it. 00:33:13.117 [2024-07-22 18:10:17.154781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.155123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.155149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.117 qpair failed and we were unable to recover it. 00:33:13.117 [2024-07-22 18:10:17.155519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.155889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.155915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.117 qpair failed and we were unable to recover it. 00:33:13.117 [2024-07-22 18:10:17.156284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.156581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.156609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.117 qpair failed and we were unable to recover it. 00:33:13.117 [2024-07-22 18:10:17.156971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.157331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.157366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.117 qpair failed and we were unable to recover it. 00:33:13.117 [2024-07-22 18:10:17.157721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.158025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.158059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.117 qpair failed and we were unable to recover it. 
00:33:13.117 [2024-07-22 18:10:17.158401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.158734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.158760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.117 qpair failed and we were unable to recover it. 00:33:13.117 [2024-07-22 18:10:17.159126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.159368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.159396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.117 qpair failed and we were unable to recover it. 00:33:13.117 [2024-07-22 18:10:17.159794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.160147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.160174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.117 qpair failed and we were unable to recover it. 00:33:13.117 [2024-07-22 18:10:17.160520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.160899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.160926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.117 qpair failed and we were unable to recover it. 00:33:13.117 [2024-07-22 18:10:17.161298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.161655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.161683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.117 qpair failed and we were unable to recover it. 00:33:13.117 [2024-07-22 18:10:17.162040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.162360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.162388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.117 qpair failed and we were unable to recover it. 00:33:13.117 [2024-07-22 18:10:17.162792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.163114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.163142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.117 qpair failed and we were unable to recover it. 
00:33:13.117 [2024-07-22 18:10:17.163520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.163874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.163901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.117 qpair failed and we were unable to recover it. 00:33:13.117 [2024-07-22 18:10:17.164153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.164546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.117 [2024-07-22 18:10:17.164575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.118 qpair failed and we were unable to recover it. 00:33:13.118 [2024-07-22 18:10:17.164812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.165202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.165235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.118 qpair failed and we were unable to recover it. 00:33:13.118 [2024-07-22 18:10:17.165567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.165889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.165916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.118 qpair failed and we were unable to recover it. 00:33:13.118 [2024-07-22 18:10:17.166284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.166627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.166655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.118 qpair failed and we were unable to recover it. 00:33:13.118 [2024-07-22 18:10:17.167007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.167371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.167400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.118 qpair failed and we were unable to recover it. 00:33:13.118 [2024-07-22 18:10:17.167729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.168106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.168132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.118 qpair failed and we were unable to recover it. 
00:33:13.118 [2024-07-22 18:10:17.168501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.168835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.168862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.118 qpair failed and we were unable to recover it. 00:33:13.118 [2024-07-22 18:10:17.169198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.169482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.169510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.118 qpair failed and we were unable to recover it. 00:33:13.118 [2024-07-22 18:10:17.169870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.170233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.170259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.118 qpair failed and we were unable to recover it. 00:33:13.118 [2024-07-22 18:10:17.170600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.170987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.171013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.118 qpair failed and we were unable to recover it. 00:33:13.118 [2024-07-22 18:10:17.171375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.171707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.171733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.118 qpair failed and we were unable to recover it. 00:33:13.118 [2024-07-22 18:10:17.172088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.172415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.172444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.118 qpair failed and we were unable to recover it. 00:33:13.118 [2024-07-22 18:10:17.172821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.173051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.173081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.118 qpair failed and we were unable to recover it. 
00:33:13.118 [2024-07-22 18:10:17.173429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.173655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.173685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.118 qpair failed and we were unable to recover it. 00:33:13.118 [2024-07-22 18:10:17.174040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.174366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.174393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.118 qpair failed and we were unable to recover it. 00:33:13.118 [2024-07-22 18:10:17.174740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.174950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.174977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.118 qpair failed and we were unable to recover it. 00:33:13.118 [2024-07-22 18:10:17.175383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.175743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.175770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.118 qpair failed and we were unable to recover it. 00:33:13.118 [2024-07-22 18:10:17.176131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.176490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.176519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.118 qpair failed and we were unable to recover it. 00:33:13.118 [2024-07-22 18:10:17.176912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.177273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.177300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.118 qpair failed and we were unable to recover it. 00:33:13.118 [2024-07-22 18:10:17.177654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.177994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.178020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.118 qpair failed and we were unable to recover it. 
00:33:13.118 [2024-07-22 18:10:17.178374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.178669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.178696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.118 qpair failed and we were unable to recover it. 00:33:13.118 [2024-07-22 18:10:17.179112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.179470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.179498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.118 qpair failed and we were unable to recover it. 00:33:13.118 [2024-07-22 18:10:17.179900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.180257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.180283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.118 qpair failed and we were unable to recover it. 00:33:13.118 [2024-07-22 18:10:17.180475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.180710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.180736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.118 qpair failed and we were unable to recover it. 00:33:13.118 [2024-07-22 18:10:17.181125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.181488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.181516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.118 qpair failed and we were unable to recover it. 00:33:13.118 [2024-07-22 18:10:17.181889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.182298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.182325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.118 qpair failed and we were unable to recover it. 00:33:13.118 [2024-07-22 18:10:17.182698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.183020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.183046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.118 qpair failed and we were unable to recover it. 
00:33:13.118 [2024-07-22 18:10:17.183436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.183823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.183849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.118 qpair failed and we were unable to recover it. 00:33:13.118 [2024-07-22 18:10:17.184213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.184554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.184582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.118 qpair failed and we were unable to recover it. 00:33:13.118 [2024-07-22 18:10:17.184966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.118 [2024-07-22 18:10:17.185314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.185341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.119 qpair failed and we were unable to recover it. 00:33:13.119 [2024-07-22 18:10:17.185751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.186052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.186080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.119 qpair failed and we were unable to recover it. 00:33:13.119 [2024-07-22 18:10:17.186444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.186806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.186833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.119 qpair failed and we were unable to recover it. 00:33:13.119 [2024-07-22 18:10:17.187169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.187576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.187605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.119 qpair failed and we were unable to recover it. 00:33:13.119 [2024-07-22 18:10:17.187947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.188279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.188305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.119 qpair failed and we were unable to recover it. 
00:33:13.119 [2024-07-22 18:10:17.188690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.189024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.189051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.119 qpair failed and we were unable to recover it. 00:33:13.119 [2024-07-22 18:10:17.189405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.189733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.189760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.119 qpair failed and we were unable to recover it. 00:33:13.119 [2024-07-22 18:10:17.190111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.190342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.190381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.119 qpair failed and we were unable to recover it. 00:33:13.119 [2024-07-22 18:10:17.190713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.191070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.191097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.119 qpair failed and we were unable to recover it. 00:33:13.119 [2024-07-22 18:10:17.191333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.191600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.191629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.119 qpair failed and we were unable to recover it. 00:33:13.119 [2024-07-22 18:10:17.192007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.192342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.192377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.119 qpair failed and we were unable to recover it. 00:33:13.119 [2024-07-22 18:10:17.192726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.193085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.193111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.119 qpair failed and we were unable to recover it. 
00:33:13.119 [2024-07-22 18:10:17.193474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.193805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.193832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.119 qpair failed and we were unable to recover it. 00:33:13.119 [2024-07-22 18:10:17.194219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.194444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.194472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.119 qpair failed and we were unable to recover it. 00:33:13.119 [2024-07-22 18:10:17.194847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.195177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.195203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.119 qpair failed and we were unable to recover it. 00:33:13.119 [2024-07-22 18:10:17.195587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.195950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.195977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.119 qpair failed and we were unable to recover it. 00:33:13.119 [2024-07-22 18:10:17.196316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.196703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.196731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.119 qpair failed and we were unable to recover it. 00:33:13.119 [2024-07-22 18:10:17.196983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.197377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.197405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.119 qpair failed and we were unable to recover it. 00:33:13.119 [2024-07-22 18:10:17.197765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.198130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.198156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.119 qpair failed and we were unable to recover it. 
00:33:13.119 [2024-07-22 18:10:17.198332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.198699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.198726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.119 qpair failed and we were unable to recover it. 00:33:13.119 [2024-07-22 18:10:17.198982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.199370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.199397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.119 qpair failed and we were unable to recover it. 00:33:13.119 [2024-07-22 18:10:17.199798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.200095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.200121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.119 qpair failed and we were unable to recover it. 00:33:13.119 [2024-07-22 18:10:17.200476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.200842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.200869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.119 qpair failed and we were unable to recover it. 00:33:13.119 [2024-07-22 18:10:17.201246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.201605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.119 [2024-07-22 18:10:17.201639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.119 qpair failed and we were unable to recover it. 00:33:13.120 [2024-07-22 18:10:17.202008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.202321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.202356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.120 qpair failed and we were unable to recover it. 00:33:13.120 [2024-07-22 18:10:17.202690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.203061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.203087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.120 qpair failed and we were unable to recover it. 
00:33:13.120 [2024-07-22 18:10:17.203421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.203652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.203678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.120 qpair failed and we were unable to recover it. 00:33:13.120 [2024-07-22 18:10:17.204048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.204304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.204331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.120 qpair failed and we were unable to recover it. 00:33:13.120 [2024-07-22 18:10:17.204695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.205070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.205096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.120 qpair failed and we were unable to recover it. 00:33:13.120 [2024-07-22 18:10:17.205498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.205829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.205855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.120 qpair failed and we were unable to recover it. 00:33:13.120 [2024-07-22 18:10:17.206261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.206595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.206622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.120 qpair failed and we were unable to recover it. 00:33:13.120 [2024-07-22 18:10:17.206990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.207332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.207367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.120 qpair failed and we were unable to recover it. 00:33:13.120 [2024-07-22 18:10:17.207598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.207936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.207963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.120 qpair failed and we were unable to recover it. 
00:33:13.120 [2024-07-22 18:10:17.208276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.208488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.208516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.120 qpair failed and we were unable to recover it. 00:33:13.120 [2024-07-22 18:10:17.208870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.209220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.209247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.120 qpair failed and we were unable to recover it. 00:33:13.120 [2024-07-22 18:10:17.209543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.209902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.209928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.120 qpair failed and we were unable to recover it. 00:33:13.120 [2024-07-22 18:10:17.210330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.210706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.210734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.120 qpair failed and we were unable to recover it. 00:33:13.120 [2024-07-22 18:10:17.211088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.211321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.211359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.120 qpair failed and we were unable to recover it. 00:33:13.120 [2024-07-22 18:10:17.211723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.212086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.212112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.120 qpair failed and we were unable to recover it. 00:33:13.120 [2024-07-22 18:10:17.212457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.212828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.212854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.120 qpair failed and we were unable to recover it. 
00:33:13.120 [2024-07-22 18:10:17.213219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.213577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.213606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.120 qpair failed and we were unable to recover it. 00:33:13.120 [2024-07-22 18:10:17.213829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.214196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.214223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.120 qpair failed and we were unable to recover it. 00:33:13.120 [2024-07-22 18:10:17.214597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.214945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.214972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.120 qpair failed and we were unable to recover it. 00:33:13.120 [2024-07-22 18:10:17.215219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.215544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.215572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.120 qpair failed and we were unable to recover it. 00:33:13.120 [2024-07-22 18:10:17.215924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.216282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.216309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.120 qpair failed and we were unable to recover it. 00:33:13.120 [2024-07-22 18:10:17.216711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.217073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.217099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.120 qpair failed and we were unable to recover it. 00:33:13.120 [2024-07-22 18:10:17.217523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.217950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.217981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.120 qpair failed and we were unable to recover it. 
00:33:13.120 [2024-07-22 18:10:17.218259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.218588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.218617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.120 qpair failed and we were unable to recover it. 00:33:13.120 [2024-07-22 18:10:17.218964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.219320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.219347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.120 qpair failed and we were unable to recover it. 00:33:13.120 [2024-07-22 18:10:17.219722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.220047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.220074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.120 qpair failed and we were unable to recover it. 00:33:13.120 [2024-07-22 18:10:17.220409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.220780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.220807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.120 qpair failed and we were unable to recover it. 00:33:13.120 [2024-07-22 18:10:17.221173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.221415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.221442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.120 qpair failed and we were unable to recover it. 00:33:13.120 [2024-07-22 18:10:17.221822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.120 [2024-07-22 18:10:17.222048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.222075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.121 qpair failed and we were unable to recover it. 00:33:13.121 [2024-07-22 18:10:17.222472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.222695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.222725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.121 qpair failed and we were unable to recover it. 
00:33:13.121 [2024-07-22 18:10:17.223131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.223358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.223389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.121 qpair failed and we were unable to recover it. 00:33:13.121 [2024-07-22 18:10:17.223825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.224195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.224223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.121 qpair failed and we were unable to recover it. 00:33:13.121 [2024-07-22 18:10:17.224594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.224929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.224955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.121 qpair failed and we were unable to recover it. 00:33:13.121 [2024-07-22 18:10:17.225360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.225704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.225731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.121 qpair failed and we were unable to recover it. 00:33:13.121 [2024-07-22 18:10:17.225966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.226318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.226344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.121 qpair failed and we were unable to recover it. 00:33:13.121 [2024-07-22 18:10:17.226719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.227094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.227120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.121 qpair failed and we were unable to recover it. 00:33:13.121 [2024-07-22 18:10:17.227374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.227721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.227747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.121 qpair failed and we were unable to recover it. 
00:33:13.121 [2024-07-22 18:10:17.228105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.228463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.228492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.121 qpair failed and we were unable to recover it. 00:33:13.121 [2024-07-22 18:10:17.228865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.229223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.229249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.121 qpair failed and we were unable to recover it. 00:33:13.121 [2024-07-22 18:10:17.229627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.229979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.230006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.121 qpair failed and we were unable to recover it. 00:33:13.121 [2024-07-22 18:10:17.230370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.230749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.230776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.121 qpair failed and we were unable to recover it. 00:33:13.121 [2024-07-22 18:10:17.231138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.231490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.231519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.121 qpair failed and we were unable to recover it. 00:33:13.121 [2024-07-22 18:10:17.231885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.232267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.232294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.121 qpair failed and we were unable to recover it. 00:33:13.121 [2024-07-22 18:10:17.232636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.232989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.233017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.121 qpair failed and we were unable to recover it. 
00:33:13.121 [2024-07-22 18:10:17.233408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.233810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.233836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.121 qpair failed and we were unable to recover it. 00:33:13.121 [2024-07-22 18:10:17.234200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.234557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.234585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.121 qpair failed and we were unable to recover it. 00:33:13.121 [2024-07-22 18:10:17.234988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.235216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.235249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.121 qpair failed and we were unable to recover it. 00:33:13.121 [2024-07-22 18:10:17.235649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.235985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.236011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.121 qpair failed and we were unable to recover it. 00:33:13.121 [2024-07-22 18:10:17.236370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.236750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.236777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.121 qpair failed and we were unable to recover it. 00:33:13.121 [2024-07-22 18:10:17.236918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.237305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.237333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.121 qpair failed and we were unable to recover it. 00:33:13.121 [2024-07-22 18:10:17.237705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.238069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.238108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.121 qpair failed and we were unable to recover it. 
00:33:13.121 [2024-07-22 18:10:17.238468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.238822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.238850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.121 qpair failed and we were unable to recover it. 00:33:13.121 [2024-07-22 18:10:17.239189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.239556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.239585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.121 qpair failed and we were unable to recover it. 00:33:13.121 [2024-07-22 18:10:17.239949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.240305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.240331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.121 qpair failed and we were unable to recover it. 00:33:13.121 [2024-07-22 18:10:17.240748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.241078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.241105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.121 qpair failed and we were unable to recover it. 00:33:13.121 [2024-07-22 18:10:17.241483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.241880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.241907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.121 qpair failed and we were unable to recover it. 00:33:13.121 [2024-07-22 18:10:17.242266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.242593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.121 [2024-07-22 18:10:17.242621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.121 qpair failed and we were unable to recover it. 00:33:13.121 [2024-07-22 18:10:17.242871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.243114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.243141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.122 qpair failed and we were unable to recover it. 
00:33:13.122 [2024-07-22 18:10:17.243404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.243748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.243775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.122 qpair failed and we were unable to recover it. 00:33:13.122 [2024-07-22 18:10:17.244169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.244494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.244522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.122 qpair failed and we were unable to recover it. 00:33:13.122 [2024-07-22 18:10:17.244884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.245199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.245227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.122 qpair failed and we were unable to recover it. 00:33:13.122 [2024-07-22 18:10:17.245642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.245874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.245901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.122 qpair failed and we were unable to recover it. 00:33:13.122 [2024-07-22 18:10:17.246273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.246626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.246654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.122 qpair failed and we were unable to recover it. 00:33:13.122 [2024-07-22 18:10:17.246892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.247302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.247329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.122 qpair failed and we were unable to recover it. 00:33:13.122 [2024-07-22 18:10:17.247717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.247964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.247992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.122 qpair failed and we were unable to recover it. 
00:33:13.122 [2024-07-22 18:10:17.248368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.248769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.248797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.122 qpair failed and we were unable to recover it. 00:33:13.122 [2024-07-22 18:10:17.249152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.249504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.249533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.122 qpair failed and we were unable to recover it. 00:33:13.122 [2024-07-22 18:10:17.249802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.250159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.250187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.122 qpair failed and we were unable to recover it. 00:33:13.122 [2024-07-22 18:10:17.250454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.250778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.250805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.122 qpair failed and we were unable to recover it. 00:33:13.122 [2024-07-22 18:10:17.251058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.251382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.251411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.122 qpair failed and we were unable to recover it. 00:33:13.122 [2024-07-22 18:10:17.251683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.252039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.252066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.122 qpair failed and we were unable to recover it. 00:33:13.122 [2024-07-22 18:10:17.252459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.252702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.252734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.122 qpair failed and we were unable to recover it. 
00:33:13.122 [2024-07-22 18:10:17.253087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.253470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.253500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.122 qpair failed and we were unable to recover it. 00:33:13.122 [2024-07-22 18:10:17.253726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.254094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.254120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.122 qpair failed and we were unable to recover it. 00:33:13.122 [2024-07-22 18:10:17.254344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.254758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.254786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.122 qpair failed and we were unable to recover it. 00:33:13.122 [2024-07-22 18:10:17.255202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.255525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.255553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.122 qpair failed and we were unable to recover it. 00:33:13.122 [2024-07-22 18:10:17.255934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.256281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.256308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.122 qpair failed and we were unable to recover it. 00:33:13.122 [2024-07-22 18:10:17.256664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.256996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.257023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.122 qpair failed and we were unable to recover it. 00:33:13.122 [2024-07-22 18:10:17.257379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.257622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.257652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.122 qpair failed and we were unable to recover it. 
00:33:13.122 [2024-07-22 18:10:17.257899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.258206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.258234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.122 qpair failed and we were unable to recover it. 00:33:13.122 [2024-07-22 18:10:17.258504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.258832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.258859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.122 qpair failed and we were unable to recover it. 00:33:13.122 [2024-07-22 18:10:17.259204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.259576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.259605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.122 qpair failed and we were unable to recover it. 00:33:13.122 [2024-07-22 18:10:17.260000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.260363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.260392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.122 qpair failed and we were unable to recover it. 00:33:13.122 [2024-07-22 18:10:17.260762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.261119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.261149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.122 qpair failed and we were unable to recover it. 00:33:13.122 [2024-07-22 18:10:17.261480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.261852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.261880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.122 qpair failed and we were unable to recover it. 00:33:13.122 [2024-07-22 18:10:17.262285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.262555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.122 [2024-07-22 18:10:17.262583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.122 qpair failed and we were unable to recover it. 
00:33:13.123 [2024-07-22 18:10:17.262963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.263365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.263395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.123 qpair failed and we were unable to recover it. 00:33:13.123 [2024-07-22 18:10:17.263779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.264019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.264046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.123 qpair failed and we were unable to recover it. 00:33:13.123 [2024-07-22 18:10:17.264411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.264652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.264681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.123 qpair failed and we were unable to recover it. 00:33:13.123 [2024-07-22 18:10:17.264917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.265318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.265346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.123 qpair failed and we were unable to recover it. 00:33:13.123 [2024-07-22 18:10:17.265823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.266187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.266215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.123 qpair failed and we were unable to recover it. 00:33:13.123 [2024-07-22 18:10:17.266576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.266848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.266875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.123 qpair failed and we were unable to recover it. 00:33:13.123 [2024-07-22 18:10:17.267114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.267479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.267509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.123 qpair failed and we were unable to recover it. 
00:33:13.123 [2024-07-22 18:10:17.267860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.268233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.268260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.123 qpair failed and we were unable to recover it. 00:33:13.123 [2024-07-22 18:10:17.268493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.268877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.268904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.123 qpair failed and we were unable to recover it. 00:33:13.123 [2024-07-22 18:10:17.269150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.269375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.269404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.123 qpair failed and we were unable to recover it. 00:33:13.123 [2024-07-22 18:10:17.269751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.270115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.270141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.123 qpair failed and we were unable to recover it. 00:33:13.123 [2024-07-22 18:10:17.270367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.270719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.270747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.123 qpair failed and we were unable to recover it. 00:33:13.123 [2024-07-22 18:10:17.271085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.271443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.271473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.123 qpair failed and we were unable to recover it. 00:33:13.123 [2024-07-22 18:10:17.271708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.272034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.272061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.123 qpair failed and we were unable to recover it. 
00:33:13.123 [2024-07-22 18:10:17.272429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.272771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.272798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.123 qpair failed and we were unable to recover it. 00:33:13.123 [2024-07-22 18:10:17.273150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.273497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.273531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.123 qpair failed and we were unable to recover it. 00:33:13.123 [2024-07-22 18:10:17.273887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.274251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.274278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.123 qpair failed and we were unable to recover it. 00:33:13.123 [2024-07-22 18:10:17.274654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.274988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.275016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.123 qpair failed and we were unable to recover it. 00:33:13.123 [2024-07-22 18:10:17.275381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.275661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.275688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.123 qpair failed and we were unable to recover it. 00:33:13.123 [2024-07-22 18:10:17.275964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.276332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.276372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.123 qpair failed and we were unable to recover it. 00:33:13.123 [2024-07-22 18:10:17.276772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.277002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.277029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.123 qpair failed and we were unable to recover it. 
00:33:13.123 [2024-07-22 18:10:17.277430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.277814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.277840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.123 qpair failed and we were unable to recover it. 00:33:13.123 [2024-07-22 18:10:17.278212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.278544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.278571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.123 qpair failed and we were unable to recover it. 00:33:13.123 [2024-07-22 18:10:17.278963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.279301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.279328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.123 qpair failed and we were unable to recover it. 00:33:13.123 [2024-07-22 18:10:17.279740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.280064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.280090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.123 qpair failed and we were unable to recover it. 00:33:13.123 [2024-07-22 18:10:17.280445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.280816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.280844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.123 qpair failed and we were unable to recover it. 00:33:13.123 [2024-07-22 18:10:17.281245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.281584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.281612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.123 qpair failed and we were unable to recover it. 00:33:13.123 [2024-07-22 18:10:17.281966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.282339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.282381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.123 qpair failed and we were unable to recover it. 
00:33:13.123 [2024-07-22 18:10:17.282743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.283096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.123 [2024-07-22 18:10:17.283124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.124 qpair failed and we were unable to recover it. 00:33:13.124 [2024-07-22 18:10:17.283546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.283767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.283797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.124 qpair failed and we were unable to recover it. 00:33:13.124 [2024-07-22 18:10:17.284005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.284381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.284409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.124 qpair failed and we were unable to recover it. 00:33:13.124 [2024-07-22 18:10:17.284767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.285093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.285120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.124 qpair failed and we were unable to recover it. 00:33:13.124 [2024-07-22 18:10:17.285476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.285858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.285885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.124 qpair failed and we were unable to recover it. 00:33:13.124 [2024-07-22 18:10:17.286099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.286494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.286523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.124 qpair failed and we were unable to recover it. 00:33:13.124 [2024-07-22 18:10:17.286888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.287250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.287277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.124 qpair failed and we were unable to recover it. 
00:33:13.124 [2024-07-22 18:10:17.287646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.287981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.288008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.124 qpair failed and we were unable to recover it. 00:33:13.124 [2024-07-22 18:10:17.288346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.288739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.288766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.124 qpair failed and we were unable to recover it. 00:33:13.124 [2024-07-22 18:10:17.289121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.289473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.289502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.124 qpair failed and we were unable to recover it. 00:33:13.124 [2024-07-22 18:10:17.289856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.290206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.290233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.124 qpair failed and we were unable to recover it. 00:33:13.124 [2024-07-22 18:10:17.290587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.290959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.290986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.124 qpair failed and we were unable to recover it. 00:33:13.124 [2024-07-22 18:10:17.291396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.291722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.291750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.124 qpair failed and we were unable to recover it. 00:33:13.124 [2024-07-22 18:10:17.292129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.292521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.292549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.124 qpair failed and we were unable to recover it. 
00:33:13.124 [2024-07-22 18:10:17.292903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.293126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.293155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.124 qpair failed and we were unable to recover it. 00:33:13.124 [2024-07-22 18:10:17.293420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.293638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.293665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.124 qpair failed and we were unable to recover it. 00:33:13.124 [2024-07-22 18:10:17.293887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.294142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.294169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.124 qpair failed and we were unable to recover it. 00:33:13.124 [2024-07-22 18:10:17.294423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.294781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.294808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.124 qpair failed and we were unable to recover it. 00:33:13.124 [2024-07-22 18:10:17.295152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.295550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.295579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.124 qpair failed and we were unable to recover it. 00:33:13.124 [2024-07-22 18:10:17.295974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.296360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.296390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.124 qpair failed and we were unable to recover it. 00:33:13.124 [2024-07-22 18:10:17.296757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.297088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.297118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.124 qpair failed and we were unable to recover it. 
00:33:13.124 [2024-07-22 18:10:17.297452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.297825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.297854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.124 qpair failed and we were unable to recover it. 00:33:13.124 [2024-07-22 18:10:17.298211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.298578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.298605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.124 qpair failed and we were unable to recover it. 00:33:13.124 [2024-07-22 18:10:17.298980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.299317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.299343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.124 qpair failed and we were unable to recover it. 00:33:13.124 [2024-07-22 18:10:17.299602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.299954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.124 [2024-07-22 18:10:17.299980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.124 qpair failed and we were unable to recover it. 00:33:13.124 [2024-07-22 18:10:17.300371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.300631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.300658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.125 qpair failed and we were unable to recover it. 00:33:13.125 [2024-07-22 18:10:17.301011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.301374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.301403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.125 qpair failed and we were unable to recover it. 00:33:13.125 [2024-07-22 18:10:17.301780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.302109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.302136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.125 qpair failed and we were unable to recover it. 
00:33:13.125 [2024-07-22 18:10:17.302385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.302770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.302798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.125 qpair failed and we were unable to recover it. 00:33:13.125 [2024-07-22 18:10:17.303171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.303530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.303560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.125 qpair failed and we were unable to recover it. 00:33:13.125 [2024-07-22 18:10:17.303925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.304286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.304314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.125 qpair failed and we were unable to recover it. 00:33:13.125 [2024-07-22 18:10:17.304703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.305040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.305068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.125 qpair failed and we were unable to recover it. 00:33:13.125 [2024-07-22 18:10:17.305410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.305781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.305809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.125 qpair failed and we were unable to recover it. 00:33:13.125 [2024-07-22 18:10:17.306056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.306450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.306478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.125 qpair failed and we were unable to recover it. 00:33:13.125 [2024-07-22 18:10:17.306842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.307197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.307226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.125 qpair failed and we were unable to recover it. 
00:33:13.125 [2024-07-22 18:10:17.307564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.307918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.307945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.125 qpair failed and we were unable to recover it. 00:33:13.125 [2024-07-22 18:10:17.308313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.308674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.308703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.125 qpair failed and we were unable to recover it. 00:33:13.125 [2024-07-22 18:10:17.309053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.309396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.309427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.125 qpair failed and we were unable to recover it. 00:33:13.125 [2024-07-22 18:10:17.309823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.310175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.310208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.125 qpair failed and we were unable to recover it. 00:33:13.125 [2024-07-22 18:10:17.310565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.310925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.310951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.125 qpair failed and we were unable to recover it. 00:33:13.125 [2024-07-22 18:10:17.311208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.311532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.311561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.125 qpair failed and we were unable to recover it. 00:33:13.125 [2024-07-22 18:10:17.311930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.312270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.312297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.125 qpair failed and we were unable to recover it. 
00:33:13.125 [2024-07-22 18:10:17.312709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.313086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.313113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.125 qpair failed and we were unable to recover it. 00:33:13.125 [2024-07-22 18:10:17.313477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.313845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.313872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.125 qpair failed and we were unable to recover it. 00:33:13.125 [2024-07-22 18:10:17.314238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.314580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.314608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.125 qpair failed and we were unable to recover it. 00:33:13.125 [2024-07-22 18:10:17.314970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.315320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.315347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.125 qpair failed and we were unable to recover it. 00:33:13.125 [2024-07-22 18:10:17.315743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.316096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.316124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.125 qpair failed and we were unable to recover it. 00:33:13.125 [2024-07-22 18:10:17.316522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.316888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.316916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.125 qpair failed and we were unable to recover it. 00:33:13.125 [2024-07-22 18:10:17.317160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.317493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.317528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.125 qpair failed and we were unable to recover it. 
00:33:13.125 [2024-07-22 18:10:17.317759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.318120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.318146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.125 qpair failed and we were unable to recover it. 00:33:13.125 [2024-07-22 18:10:17.318511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.318864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.318892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.125 qpair failed and we were unable to recover it. 00:33:13.125 [2024-07-22 18:10:17.319130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.319484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.319512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.125 qpair failed and we were unable to recover it. 00:33:13.125 [2024-07-22 18:10:17.319676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.320050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.320079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.125 qpair failed and we were unable to recover it. 00:33:13.125 [2024-07-22 18:10:17.320319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.320650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.125 [2024-07-22 18:10:17.320678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.126 qpair failed and we were unable to recover it. 00:33:13.126 [2024-07-22 18:10:17.321042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.126 [2024-07-22 18:10:17.321438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.126 [2024-07-22 18:10:17.321466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.126 qpair failed and we were unable to recover it. 00:33:13.126 [2024-07-22 18:10:17.321826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.126 [2024-07-22 18:10:17.322185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.126 [2024-07-22 18:10:17.322212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.126 qpair failed and we were unable to recover it. 
00:33:13.126 [2024-07-22 18:10:17.322566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.126 [2024-07-22 18:10:17.322973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.126 [2024-07-22 18:10:17.323001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.126 qpair failed and we were unable to recover it. 00:33:13.126 [2024-07-22 18:10:17.323376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.126 [2024-07-22 18:10:17.323723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.126 [2024-07-22 18:10:17.323750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.126 qpair failed and we were unable to recover it. 00:33:13.126 [2024-07-22 18:10:17.324105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.126 [2024-07-22 18:10:17.324463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.126 [2024-07-22 18:10:17.324491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.126 qpair failed and we were unable to recover it. 00:33:13.126 [2024-07-22 18:10:17.324898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.126 [2024-07-22 18:10:17.325261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.126 [2024-07-22 18:10:17.325287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.126 qpair failed and we were unable to recover it. 00:33:13.126 [2024-07-22 18:10:17.325660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.126 [2024-07-22 18:10:17.326044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.126 [2024-07-22 18:10:17.326071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.126 qpair failed and we were unable to recover it. 00:33:13.126 [2024-07-22 18:10:17.326414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.126 [2024-07-22 18:10:17.326803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.126 [2024-07-22 18:10:17.326830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.126 qpair failed and we were unable to recover it. 00:33:13.126 [2024-07-22 18:10:17.327188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.126 [2024-07-22 18:10:17.327462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.126 [2024-07-22 18:10:17.327491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.126 qpair failed and we were unable to recover it. 
00:33:13.126 [2024-07-22 18:10:17.327857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.126 [2024-07-22 18:10:17.328160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.126 [2024-07-22 18:10:17.328186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420
00:33:13.126 qpair failed and we were unable to recover it.
[... the same three-record error pattern (two posix_sock_create connect() failures with errno = 111, followed by an nvme_tcp_qpair_connect_sock error for tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it.") repeats for every connection attempt from 18:10:17.328 through 18:10:17.438 ...]
00:33:13.400 [2024-07-22 18:10:17.438804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.400 [2024-07-22 18:10:17.439026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.400 [2024-07-22 18:10:17.439052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420
00:33:13.400 qpair failed and we were unable to recover it.
00:33:13.400 [2024-07-22 18:10:17.439327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-22 18:10:17.439720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-22 18:10:17.439748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-22 18:10:17.440112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-22 18:10:17.440477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-22 18:10:17.440505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-22 18:10:17.440737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-22 18:10:17.440980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-22 18:10:17.441007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-22 18:10:17.441397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-22 18:10:17.441625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-22 18:10:17.441655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-22 18:10:17.442016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-22 18:10:17.442239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-22 18:10:17.442266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.400 qpair failed and we were unable to recover it. 00:33:13.400 [2024-07-22 18:10:17.442686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.400 [2024-07-22 18:10:17.443034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.443061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-22 18:10:17.443455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.443787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.443813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 
00:33:13.401 [2024-07-22 18:10:17.444205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.444531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.444559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-22 18:10:17.444805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.445117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.445144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-22 18:10:17.445522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.445860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.445886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-22 18:10:17.446286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.446639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.446668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-22 18:10:17.447040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.447399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.447428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-22 18:10:17.447668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.448009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.448035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-22 18:10:17.448396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.448810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.448837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 
00:33:13.401 [2024-07-22 18:10:17.449199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.449510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.449539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-22 18:10:17.449932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.450295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.450321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-22 18:10:17.450674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.450903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.450930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-22 18:10:17.451267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.451648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.451676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-22 18:10:17.451888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.452095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.452121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-22 18:10:17.452516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.452869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.452896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-22 18:10:17.453212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.453547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.453576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 
00:33:13.401 [2024-07-22 18:10:17.453943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.454303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.454330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-22 18:10:17.454742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.455078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.455104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-22 18:10:17.455450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.455865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.455892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-22 18:10:17.456259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.456621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.456650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-22 18:10:17.457010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.457296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.457323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-22 18:10:17.457689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.458017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.458043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-22 18:10:17.458379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.458731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.458757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 
00:33:13.401 [2024-07-22 18:10:17.459103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.459450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.459478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-22 18:10:17.459724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.460064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.460091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-22 18:10:17.460487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.460852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.460878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-22 18:10:17.461251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.461619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.461647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-22 18:10:17.461981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.462339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.462385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.401 [2024-07-22 18:10:17.462619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.462973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.401 [2024-07-22 18:10:17.463000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.401 qpair failed and we were unable to recover it. 00:33:13.402 [2024-07-22 18:10:17.463362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.463728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.463755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.402 qpair failed and we were unable to recover it. 
00:33:13.402 [2024-07-22 18:10:17.464162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.464492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.464521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.402 qpair failed and we were unable to recover it. 00:33:13.402 [2024-07-22 18:10:17.464869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.465175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.465202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.402 qpair failed and we were unable to recover it. 00:33:13.402 [2024-07-22 18:10:17.465620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.466001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.466027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.402 qpair failed and we were unable to recover it. 00:33:13.402 [2024-07-22 18:10:17.466398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.466778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.466805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.402 qpair failed and we were unable to recover it. 00:33:13.402 [2024-07-22 18:10:17.467176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.467414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.467442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.402 qpair failed and we were unable to recover it. 00:33:13.402 [2024-07-22 18:10:17.467811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.468170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.468196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.402 qpair failed and we were unable to recover it. 00:33:13.402 [2024-07-22 18:10:17.468534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.468891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.468918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.402 qpair failed and we were unable to recover it. 
00:33:13.402 [2024-07-22 18:10:17.469146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.469498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.469525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.402 qpair failed and we were unable to recover it. 00:33:13.402 [2024-07-22 18:10:17.469895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.470255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.470281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.402 qpair failed and we were unable to recover it. 00:33:13.402 [2024-07-22 18:10:17.470653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.470987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.471014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.402 qpair failed and we were unable to recover it. 00:33:13.402 [2024-07-22 18:10:17.471396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.471753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.471785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.402 qpair failed and we were unable to recover it. 00:33:13.402 [2024-07-22 18:10:17.472112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.472462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.472491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.402 qpair failed and we were unable to recover it. 00:33:13.402 [2024-07-22 18:10:17.472849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.473164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.473191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.402 qpair failed and we were unable to recover it. 00:33:13.402 [2024-07-22 18:10:17.473572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.473926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.473952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.402 qpair failed and we were unable to recover it. 
00:33:13.402 [2024-07-22 18:10:17.474319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.474699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.474727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.402 qpair failed and we were unable to recover it. 00:33:13.402 [2024-07-22 18:10:17.475084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.475446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.475475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.402 qpair failed and we were unable to recover it. 00:33:13.402 [2024-07-22 18:10:17.475850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.476198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.402 [2024-07-22 18:10:17.476224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.402 qpair failed and we were unable to recover it. 00:33:13.403 [2024-07-22 18:10:17.476496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.476830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.476857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.403 qpair failed and we were unable to recover it. 00:33:13.403 [2024-07-22 18:10:17.477215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.477548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.477576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.403 qpair failed and we were unable to recover it. 00:33:13.403 [2024-07-22 18:10:17.477983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.478337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.478373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.403 qpair failed and we were unable to recover it. 00:33:13.403 [2024-07-22 18:10:17.478740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.479095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.479122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.403 qpair failed and we were unable to recover it. 
00:33:13.403 [2024-07-22 18:10:17.479490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.479820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.479847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.403 qpair failed and we were unable to recover it. 00:33:13.403 [2024-07-22 18:10:17.480208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.480568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.480596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.403 qpair failed and we were unable to recover it. 00:33:13.403 [2024-07-22 18:10:17.480827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.481185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.481212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.403 qpair failed and we were unable to recover it. 00:33:13.403 [2024-07-22 18:10:17.481549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.481908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.481935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.403 qpair failed and we were unable to recover it. 00:33:13.403 [2024-07-22 18:10:17.482188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.482588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.482616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.403 qpair failed and we were unable to recover it. 00:33:13.403 [2024-07-22 18:10:17.483017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.483384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.483413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.403 qpair failed and we were unable to recover it. 00:33:13.403 [2024-07-22 18:10:17.483781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.484138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.484164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.403 qpair failed and we were unable to recover it. 
00:33:13.403 [2024-07-22 18:10:17.484606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.484939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.484966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.403 qpair failed and we were unable to recover it. 00:33:13.403 [2024-07-22 18:10:17.485299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.485536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.485568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.403 qpair failed and we were unable to recover it. 00:33:13.403 [2024-07-22 18:10:17.485867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.486086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.486115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.403 qpair failed and we were unable to recover it. 00:33:13.403 [2024-07-22 18:10:17.486499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.486739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.486765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.403 qpair failed and we were unable to recover it. 00:33:13.403 [2024-07-22 18:10:17.486977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.487335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.487381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.403 qpair failed and we were unable to recover it. 00:33:13.403 [2024-07-22 18:10:17.487650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.488039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.488066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.403 qpair failed and we were unable to recover it. 00:33:13.403 [2024-07-22 18:10:17.488475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.488791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.488817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.403 qpair failed and we were unable to recover it. 
00:33:13.403 [2024-07-22 18:10:17.489212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.489618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.489647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.403 qpair failed and we were unable to recover it. 00:33:13.403 [2024-07-22 18:10:17.490007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.490367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.490395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.403 qpair failed and we were unable to recover it. 00:33:13.403 [2024-07-22 18:10:17.490778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.491136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.491162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.403 qpair failed and we were unable to recover it. 00:33:13.403 [2024-07-22 18:10:17.491494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.491841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.491868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.403 qpair failed and we were unable to recover it. 00:33:13.403 [2024-07-22 18:10:17.492270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.492607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.492636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.403 qpair failed and we were unable to recover it. 00:33:13.403 [2024-07-22 18:10:17.492986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.493367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.493396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.403 qpair failed and we were unable to recover it. 00:33:13.403 [2024-07-22 18:10:17.493756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.494083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.494111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.403 qpair failed and we were unable to recover it. 
00:33:13.403 [2024-07-22 18:10:17.494479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.494829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.494857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.403 qpair failed and we were unable to recover it. 00:33:13.403 [2024-07-22 18:10:17.495212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.495491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.495521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.403 qpair failed and we were unable to recover it. 00:33:13.403 [2024-07-22 18:10:17.495903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.496143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.403 [2024-07-22 18:10:17.496170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.404 qpair failed and we were unable to recover it. 00:33:13.404 [2024-07-22 18:10:17.496482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.496849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.496876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.404 qpair failed and we were unable to recover it. 00:33:13.404 [2024-07-22 18:10:17.497138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.497501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.497529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.404 qpair failed and we were unable to recover it. 00:33:13.404 [2024-07-22 18:10:17.497800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.498083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.498110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.404 qpair failed and we were unable to recover it. 00:33:13.404 [2024-07-22 18:10:17.498399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.498828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.498855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.404 qpair failed and we were unable to recover it. 
00:33:13.404 [2024-07-22 18:10:17.499218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.499474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.499502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.404 qpair failed and we were unable to recover it. 00:33:13.404 [2024-07-22 18:10:17.499869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.500223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.500250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.404 qpair failed and we were unable to recover it. 00:33:13.404 [2024-07-22 18:10:17.500523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.500917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.500944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.404 qpair failed and we were unable to recover it. 00:33:13.404 [2024-07-22 18:10:17.501306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.501648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.501678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.404 qpair failed and we were unable to recover it. 00:33:13.404 [2024-07-22 18:10:17.502027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.502380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.502408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.404 qpair failed and we were unable to recover it. 00:33:13.404 [2024-07-22 18:10:17.502772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.503145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.503172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.404 qpair failed and we were unable to recover it. 00:33:13.404 [2024-07-22 18:10:17.503376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.503645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.503672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.404 qpair failed and we were unable to recover it. 
00:33:13.404 [2024-07-22 18:10:17.503918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.504256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.504283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.404 qpair failed and we were unable to recover it. 00:33:13.404 [2024-07-22 18:10:17.504648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.504886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.504917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.404 qpair failed and we were unable to recover it. 00:33:13.404 [2024-07-22 18:10:17.505272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.505501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.505529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.404 qpair failed and we were unable to recover it. 00:33:13.404 [2024-07-22 18:10:17.505891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.506236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.506263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.404 qpair failed and we were unable to recover it. 00:33:13.404 [2024-07-22 18:10:17.506655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.506869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.506896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.404 qpair failed and we were unable to recover it. 00:33:13.404 [2024-07-22 18:10:17.507266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.507619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.507654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.404 qpair failed and we were unable to recover it. 00:33:13.404 [2024-07-22 18:10:17.507881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.508228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.508256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.404 qpair failed and we were unable to recover it. 
00:33:13.404 [2024-07-22 18:10:17.508654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.509083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.509110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.404 qpair failed and we were unable to recover it. 00:33:13.404 [2024-07-22 18:10:17.509455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.509813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.509840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.404 qpair failed and we were unable to recover it. 00:33:13.404 [2024-07-22 18:10:17.510178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.510535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.510563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.404 qpair failed and we were unable to recover it. 00:33:13.404 [2024-07-22 18:10:17.510903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.511227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.511254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.404 qpair failed and we were unable to recover it. 00:33:13.404 [2024-07-22 18:10:17.511629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.511980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.512008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.404 qpair failed and we were unable to recover it. 00:33:13.404 [2024-07-22 18:10:17.512392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.512780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.512808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.404 qpair failed and we were unable to recover it. 00:33:13.404 [2024-07-22 18:10:17.513177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.513561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.513590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.404 qpair failed and we were unable to recover it. 
00:33:13.404 [2024-07-22 18:10:17.513950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.514293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.514319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.404 qpair failed and we were unable to recover it. 00:33:13.404 [2024-07-22 18:10:17.514680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.515063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.404 [2024-07-22 18:10:17.515091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.404 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-22 18:10:17.515400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.515764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.515790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-22 18:10:17.516213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.516461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.516490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-22 18:10:17.516729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.517116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.517142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-22 18:10:17.517504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.517740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.517767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-22 18:10:17.518119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.518472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.518500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 
00:33:13.405 [2024-07-22 18:10:17.518862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.519213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.519240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-22 18:10:17.519619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.519831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.519859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-22 18:10:17.520242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.520595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.520624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-22 18:10:17.520963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.521328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.521364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-22 18:10:17.521596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.521934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.521960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-22 18:10:17.522323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.522720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.522747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-22 18:10:17.522977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.523340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.523384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 
00:33:13.405 [2024-07-22 18:10:17.523708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.524017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.524044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-22 18:10:17.524423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.524786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.524812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-22 18:10:17.525175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.525554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.525581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-22 18:10:17.525845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.526242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.526269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-22 18:10:17.526650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.526993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.527022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-22 18:10:17.527376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.527777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.527804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-22 18:10:17.528173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.528527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.528557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 
00:33:13.405 [2024-07-22 18:10:17.528922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.529255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.529283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-22 18:10:17.529650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.529875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.529902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-22 18:10:17.530267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.530628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.530657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-22 18:10:17.531024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.531338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.531377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-22 18:10:17.531791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.532121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.532148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-22 18:10:17.532510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.532834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.532860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-22 18:10:17.533246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.533621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.533649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 
00:33:13.405 [2024-07-22 18:10:17.534011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.534325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.534360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-22 18:10:17.534738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.535093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.535119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.405 qpair failed and we were unable to recover it. 00:33:13.405 [2024-07-22 18:10:17.535458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.405 [2024-07-22 18:10:17.535691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.535722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-22 18:10:17.536092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.536472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.536502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-22 18:10:17.536863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.537260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.537287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-22 18:10:17.537644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.537925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.537951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-22 18:10:17.538291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.538613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.538641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 
00:33:13.406 [2024-07-22 18:10:17.538988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.539300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.539327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-22 18:10:17.539709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.540057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.540084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-22 18:10:17.540492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.540826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.540852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-22 18:10:17.541246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.541584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.541613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-22 18:10:17.541959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.542315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.542343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-22 18:10:17.542601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.542842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.542870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-22 18:10:17.543258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.543596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.543625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 
00:33:13.406 [2024-07-22 18:10:17.544019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.544360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.544394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-22 18:10:17.544786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.545120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.545147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-22 18:10:17.545515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.545897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.545924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-22 18:10:17.546311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.546679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.546708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-22 18:10:17.547064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.547450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.547479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-22 18:10:17.547854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.548236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.548265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-22 18:10:17.548496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.548855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.548882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 
00:33:13.406 [2024-07-22 18:10:17.549234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.549655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.549685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-22 18:10:17.549918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.550167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.550196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-22 18:10:17.550563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.550947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.550973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-22 18:10:17.551331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.551581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.551608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-22 18:10:17.551995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.552363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.552392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-22 18:10:17.552796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.553156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.553183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-22 18:10:17.553533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.553781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.553808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 
00:33:13.406 [2024-07-22 18:10:17.554121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.554491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.554519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.406 [2024-07-22 18:10:17.554922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.555251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.406 [2024-07-22 18:10:17.555278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.406 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-22 18:10:17.555655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.556013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.556041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-22 18:10:17.556430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.556804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.556830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-22 18:10:17.557165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.557508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.557537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-22 18:10:17.557941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.558276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.558303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-22 18:10:17.558677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.559025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.559052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 
00:33:13.407 [2024-07-22 18:10:17.559397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.559746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.559773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-22 18:10:17.560108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.560515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.560542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-22 18:10:17.560912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.561326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.561364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-22 18:10:17.561768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.562026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.562053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-22 18:10:17.562427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.562811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.562838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-22 18:10:17.563164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.563525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.563553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-22 18:10:17.563911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.564229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.564256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 
00:33:13.407 [2024-07-22 18:10:17.564643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.565036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.565063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-22 18:10:17.565451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.565803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.565830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-22 18:10:17.566194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.566559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.566587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-22 18:10:17.566926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.567236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.567262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-22 18:10:17.567614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.567966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.567993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-22 18:10:17.568378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.568750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.568777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-22 18:10:17.569113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.569454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.569482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 
00:33:13.407 [2024-07-22 18:10:17.569857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.570246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.570272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-22 18:10:17.570657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.571029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.571057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-22 18:10:17.571471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.571827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.571853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-22 18:10:17.572187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.572545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.572573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-22 18:10:17.572902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.573222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.573249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.407 qpair failed and we were unable to recover it. 00:33:13.407 [2024-07-22 18:10:17.573624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.407 [2024-07-22 18:10:17.573970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.573996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.408 qpair failed and we were unable to recover it. 00:33:13.408 [2024-07-22 18:10:17.574335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.574730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.574758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.408 qpair failed and we were unable to recover it. 
00:33:13.408 [2024-07-22 18:10:17.575090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.575453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.575482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.408 qpair failed and we were unable to recover it. 00:33:13.408 [2024-07-22 18:10:17.575842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.576194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.576220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.408 qpair failed and we were unable to recover it. 00:33:13.408 [2024-07-22 18:10:17.576596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.576909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.576936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.408 qpair failed and we were unable to recover it. 00:33:13.408 [2024-07-22 18:10:17.577287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.577661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.577691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.408 qpair failed and we were unable to recover it. 00:33:13.408 [2024-07-22 18:10:17.578055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.578384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.578412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.408 qpair failed and we were unable to recover it. 00:33:13.408 [2024-07-22 18:10:17.578795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.579157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.579184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.408 qpair failed and we were unable to recover it. 00:33:13.408 [2024-07-22 18:10:17.579563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.579949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.579975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.408 qpair failed and we were unable to recover it. 
00:33:13.408 [2024-07-22 18:10:17.580311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.580667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.580694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.408 qpair failed and we were unable to recover it. 00:33:13.408 [2024-07-22 18:10:17.580937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.581343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.581390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.408 qpair failed and we were unable to recover it. 00:33:13.408 [2024-07-22 18:10:17.581788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.582038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.582073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.408 qpair failed and we were unable to recover it. 00:33:13.408 [2024-07-22 18:10:17.582454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.582840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.582867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.408 qpair failed and we were unable to recover it. 00:33:13.408 [2024-07-22 18:10:17.583277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.583661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.583689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.408 qpair failed and we were unable to recover it. 00:33:13.408 [2024-07-22 18:10:17.584046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.584276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.584306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.408 qpair failed and we were unable to recover it. 00:33:13.408 [2024-07-22 18:10:17.584653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.584999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.585025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.408 qpair failed and we were unable to recover it. 
00:33:13.408 [2024-07-22 18:10:17.585483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.585879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.585907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.408 qpair failed and we were unable to recover it. 00:33:13.408 [2024-07-22 18:10:17.586280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.586513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.586541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.408 qpair failed and we were unable to recover it. 00:33:13.408 [2024-07-22 18:10:17.586870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.587100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.587127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.408 qpair failed and we were unable to recover it. 00:33:13.408 [2024-07-22 18:10:17.587497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.587757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.587784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.408 qpair failed and we were unable to recover it. 00:33:13.408 [2024-07-22 18:10:17.588136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.588463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.588491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.408 qpair failed and we were unable to recover it. 00:33:13.408 [2024-07-22 18:10:17.588860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.589223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.589256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.408 qpair failed and we were unable to recover it. 00:33:13.408 [2024-07-22 18:10:17.589614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.589969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.589996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.408 qpair failed and we were unable to recover it. 
00:33:13.408 [2024-07-22 18:10:17.590372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.590738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.590764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.408 qpair failed and we were unable to recover it. 00:33:13.408 [2024-07-22 18:10:17.591158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.591538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.591567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.408 qpair failed and we were unable to recover it. 00:33:13.408 [2024-07-22 18:10:17.591919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.592282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.592308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.408 qpair failed and we were unable to recover it. 00:33:13.408 [2024-07-22 18:10:17.592671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.593059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.593085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.408 qpair failed and we were unable to recover it. 00:33:13.408 [2024-07-22 18:10:17.593457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.593816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.408 [2024-07-22 18:10:17.593843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.408 qpair failed and we were unable to recover it. 00:33:13.409 [2024-07-22 18:10:17.594221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.594581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.594609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.409 qpair failed and we were unable to recover it. 00:33:13.409 [2024-07-22 18:10:17.594950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.595315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.595342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.409 qpair failed and we were unable to recover it. 
00:33:13.409 [2024-07-22 18:10:17.595629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.595979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.596007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.409 qpair failed and we were unable to recover it. 00:33:13.409 [2024-07-22 18:10:17.596402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.596751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.596779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.409 qpair failed and we were unable to recover it. 00:33:13.409 [2024-07-22 18:10:17.597148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.597508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.597537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.409 qpair failed and we were unable to recover it. 00:33:13.409 [2024-07-22 18:10:17.597896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.598248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.598277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.409 qpair failed and we were unable to recover it. 00:33:13.409 [2024-07-22 18:10:17.598534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.598908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.598939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.409 qpair failed and we were unable to recover it. 00:33:13.409 [2024-07-22 18:10:17.599342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.599724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.599752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.409 qpair failed and we were unable to recover it. 00:33:13.409 [2024-07-22 18:10:17.600114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.600383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.600413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.409 qpair failed and we were unable to recover it. 
00:33:13.409 [2024-07-22 18:10:17.600818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.601162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.601188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.409 qpair failed and we were unable to recover it. 00:33:13.409 [2024-07-22 18:10:17.601562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.601803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.601830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.409 qpair failed and we were unable to recover it. 00:33:13.409 [2024-07-22 18:10:17.602188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.602544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.602573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.409 qpair failed and we were unable to recover it. 00:33:13.409 [2024-07-22 18:10:17.602936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.603251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.603279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.409 qpair failed and we were unable to recover it. 00:33:13.409 [2024-07-22 18:10:17.603667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.604031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.604059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.409 qpair failed and we were unable to recover it. 00:33:13.409 [2024-07-22 18:10:17.604423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.604719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.604746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.409 qpair failed and we were unable to recover it. 00:33:13.409 [2024-07-22 18:10:17.605122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.605425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.605453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.409 qpair failed and we were unable to recover it. 
00:33:13.409 [2024-07-22 18:10:17.605848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.606202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.606229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.409 qpair failed and we were unable to recover it. 00:33:13.409 [2024-07-22 18:10:17.606598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.606953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.606980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.409 qpair failed and we were unable to recover it. 00:33:13.409 [2024-07-22 18:10:17.607358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.607744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.607772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.409 qpair failed and we were unable to recover it. 00:33:13.409 [2024-07-22 18:10:17.608160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.608514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.608543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.409 qpair failed and we were unable to recover it. 00:33:13.409 [2024-07-22 18:10:17.608898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.609126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.609153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.409 qpair failed and we were unable to recover it. 00:33:13.409 [2024-07-22 18:10:17.609425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.609801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.609831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.409 qpair failed and we were unable to recover it. 00:33:13.409 [2024-07-22 18:10:17.610190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.610545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.610575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.409 qpair failed and we were unable to recover it. 
00:33:13.409 [2024-07-22 18:10:17.610974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.611304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.611331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.409 qpair failed and we were unable to recover it. 00:33:13.409 [2024-07-22 18:10:17.611697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.612084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.612113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.409 qpair failed and we were unable to recover it. 00:33:13.409 [2024-07-22 18:10:17.612361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.612723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.612751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.409 qpair failed and we were unable to recover it. 00:33:13.409 [2024-07-22 18:10:17.613117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.613246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.613277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.409 qpair failed and we were unable to recover it. 00:33:13.409 [2024-07-22 18:10:17.613657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.614031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.409 [2024-07-22 18:10:17.614061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.409 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-22 18:10:17.614394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.614733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.614762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-22 18:10:17.615121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.615482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.615511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 
00:33:13.410 [2024-07-22 18:10:17.615874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.616217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.616245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-22 18:10:17.616519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.616881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.616908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-22 18:10:17.617276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.617699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.617727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-22 18:10:17.618093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.618449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.618478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-22 18:10:17.618857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.619099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.619126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-22 18:10:17.619530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.619862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.619889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-22 18:10:17.620229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.620584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.620612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 
00:33:13.410 [2024-07-22 18:10:17.620962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.621293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.621319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-22 18:10:17.621676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.622026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.622052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-22 18:10:17.622419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.622745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.622771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-22 18:10:17.623146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.623374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.623403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-22 18:10:17.623815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.624051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.624081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-22 18:10:17.624442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.624805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.624832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-22 18:10:17.625149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.625492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.625520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 
00:33:13.410 [2024-07-22 18:10:17.625902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.626260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.626294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-22 18:10:17.626679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.626915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.626942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-22 18:10:17.627267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.627615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.627645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-22 18:10:17.628046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.628341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.628380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-22 18:10:17.628749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.629106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.629134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-22 18:10:17.629492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.629864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.629892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-22 18:10:17.630254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.630567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.630595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 
00:33:13.410 [2024-07-22 18:10:17.630987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.631237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.631264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-22 18:10:17.631645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.631947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.631975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-22 18:10:17.632219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.632550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.632579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-22 18:10:17.632938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.633152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.633182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-22 18:10:17.633546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.633926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.633954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.410 qpair failed and we were unable to recover it. 00:33:13.410 [2024-07-22 18:10:17.634320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.410 [2024-07-22 18:10:17.634727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.634756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-22 18:10:17.635117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.635479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.635507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 
00:33:13.411 [2024-07-22 18:10:17.635872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.636103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.636135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-22 18:10:17.636480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.636817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.636845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-22 18:10:17.637199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.637565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.637592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-22 18:10:17.638014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.638329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.638382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-22 18:10:17.638788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.639108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.639134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-22 18:10:17.639504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.639862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.639888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-22 18:10:17.640245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.640479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.640511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 
00:33:13.411 [2024-07-22 18:10:17.640913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.641274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.641302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-22 18:10:17.641588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.641945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.641971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-22 18:10:17.642210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.642550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.642577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-22 18:10:17.642941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.643297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.643324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-22 18:10:17.643733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.643960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.643986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-22 18:10:17.644358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.644689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.644716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-22 18:10:17.645115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.645514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.645544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 
00:33:13.411 [2024-07-22 18:10:17.645905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.646226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.646252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-22 18:10:17.646646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.646947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.646973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-22 18:10:17.647341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.647709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.647736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-22 18:10:17.648088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.648454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.648481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-22 18:10:17.648742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.648961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.648988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-22 18:10:17.649372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.649806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.649834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.411 [2024-07-22 18:10:17.650189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.650548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.650577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 
00:33:13.411 [2024-07-22 18:10:17.650973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.651191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.411 [2024-07-22 18:10:17.651218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.411 qpair failed and we were unable to recover it. 00:33:13.412 [2024-07-22 18:10:17.651575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-22 18:10:17.651920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-22 18:10:17.651947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 00:33:13.412 [2024-07-22 18:10:17.652316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-22 18:10:17.652747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-22 18:10:17.652775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 00:33:13.412 [2024-07-22 18:10:17.653133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-22 18:10:17.653473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-22 18:10:17.653501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 00:33:13.412 [2024-07-22 18:10:17.653893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-22 18:10:17.654133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-22 18:10:17.654160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 00:33:13.412 [2024-07-22 18:10:17.654526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-22 18:10:17.654905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-22 18:10:17.654932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 00:33:13.412 [2024-07-22 18:10:17.655297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-22 18:10:17.655667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-22 18:10:17.655696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 
00:33:13.412 [2024-07-22 18:10:17.655948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-22 18:10:17.656313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-22 18:10:17.656341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 00:33:13.412 [2024-07-22 18:10:17.656748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-22 18:10:17.657078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-22 18:10:17.657105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 00:33:13.412 [2024-07-22 18:10:17.657473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-22 18:10:17.657820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-22 18:10:17.657846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 00:33:13.412 [2024-07-22 18:10:17.658088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-22 18:10:17.658425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-22 18:10:17.658454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 00:33:13.412 [2024-07-22 18:10:17.658815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-22 18:10:17.659172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-22 18:10:17.659200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 00:33:13.412 [2024-07-22 18:10:17.659568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-22 18:10:17.659964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-22 18:10:17.659991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 00:33:13.412 [2024-07-22 18:10:17.660249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-22 18:10:17.660612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-22 18:10:17.660641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 
00:33:13.412 [2024-07-22 18:10:17.660989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-22 18:10:17.661219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.412 [2024-07-22 18:10:17.661246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.412 qpair failed and we were unable to recover it. 00:33:13.413 [2024-07-22 18:10:17.661614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.413 [2024-07-22 18:10:17.661853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.413 [2024-07-22 18:10:17.661879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.413 qpair failed and we were unable to recover it. 00:33:13.413 [2024-07-22 18:10:17.662255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.413 [2024-07-22 18:10:17.662620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.413 [2024-07-22 18:10:17.662655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.413 qpair failed and we were unable to recover it. 00:33:13.413 [2024-07-22 18:10:17.663035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.413 [2024-07-22 18:10:17.663416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.413 [2024-07-22 18:10:17.663445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.413 qpair failed and we were unable to recover it. 00:33:13.413 [2024-07-22 18:10:17.663828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.413 [2024-07-22 18:10:17.664194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.413 [2024-07-22 18:10:17.664220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.413 qpair failed and we were unable to recover it. 00:33:13.413 [2024-07-22 18:10:17.664595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.413 [2024-07-22 18:10:17.664925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.413 [2024-07-22 18:10:17.664952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.413 qpair failed and we were unable to recover it. 00:33:13.413 [2024-07-22 18:10:17.665251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.681 [2024-07-22 18:10:17.665620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.681 [2024-07-22 18:10:17.665649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.681 qpair failed and we were unable to recover it. 
00:33:13.681 [2024-07-22 18:10:17.665991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.681 [2024-07-22 18:10:17.666317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.681 [2024-07-22 18:10:17.666343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.681 qpair failed and we were unable to recover it. 00:33:13.681 [2024-07-22 18:10:17.666783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.681 [2024-07-22 18:10:17.667160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.681 [2024-07-22 18:10:17.667187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.681 qpair failed and we were unable to recover it. 00:33:13.681 [2024-07-22 18:10:17.667565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.681 [2024-07-22 18:10:17.667893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.681 [2024-07-22 18:10:17.667920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.681 qpair failed and we were unable to recover it. 00:33:13.681 [2024-07-22 18:10:17.668153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.681 [2024-07-22 18:10:17.668499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.681 [2024-07-22 18:10:17.668528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.681 qpair failed and we were unable to recover it. 00:33:13.681 [2024-07-22 18:10:17.668918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.681 [2024-07-22 18:10:17.669153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.681 [2024-07-22 18:10:17.669182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.681 qpair failed and we were unable to recover it. 00:33:13.681 [2024-07-22 18:10:17.669551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.681 [2024-07-22 18:10:17.669932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.681 [2024-07-22 18:10:17.669959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.681 qpair failed and we were unable to recover it. 00:33:13.681 [2024-07-22 18:10:17.670314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.681 [2024-07-22 18:10:17.670678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.681 [2024-07-22 18:10:17.670707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.681 qpair failed and we were unable to recover it. 
00:33:13.681 [2024-07-22 18:10:17.671111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.681 [2024-07-22 18:10:17.671474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.681 [2024-07-22 18:10:17.671503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.681 qpair failed and we were unable to recover it. 00:33:13.681 [2024-07-22 18:10:17.671905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.681 [2024-07-22 18:10:17.672224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.681 [2024-07-22 18:10:17.672251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.681 qpair failed and we were unable to recover it. 00:33:13.681 [2024-07-22 18:10:17.672627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.681 [2024-07-22 18:10:17.672981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.681 [2024-07-22 18:10:17.673008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.681 qpair failed and we were unable to recover it. 00:33:13.681 [2024-07-22 18:10:17.673381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.681 [2024-07-22 18:10:17.673731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.681 [2024-07-22 18:10:17.673758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.681 qpair failed and we were unable to recover it. 00:33:13.681 [2024-07-22 18:10:17.674003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.681 [2024-07-22 18:10:17.674299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.674326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.674616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.674968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.674995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.675334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.675723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.675750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 
00:33:13.682 [2024-07-22 18:10:17.675983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.676269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.676296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.676675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.677086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.677114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.677517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.677839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.677866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.678092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.678325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.678367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.678749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.679114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.679141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.679488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.679859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.679885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.680296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.680682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.680709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 
00:33:13.682 [2024-07-22 18:10:17.681026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.681389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.681419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.681812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.682159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.682186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.682564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.682886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.682912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.683270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.683690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.683720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.684049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.684406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.684434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.684779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.685156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.685182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.685535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.685868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.685896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 
00:33:13.682 [2024-07-22 18:10:17.686242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.686597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.686626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.686964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.687336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.687373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.687776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.688123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.688154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.688513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.688891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.688917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.689262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.689521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.689550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.689908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.690259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.690287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.690652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.691012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.691039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 
00:33:13.682 [2024-07-22 18:10:17.691477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.691828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.691855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.692072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.692448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.692477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.692891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.693234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.693260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.693664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.693885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.693915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.694257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.694624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.694653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.694988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.695371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.695400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.695796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.696099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.696126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 
00:33:13.682 [2024-07-22 18:10:17.696484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.696736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.696764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.697105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.697418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.697447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.697832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.698193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.698219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.698557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.698924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.698952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.699343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.699727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.699760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.700150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.700472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.700500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.700874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.701141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.701168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 
00:33:13.682 [2024-07-22 18:10:17.701539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.701874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.701900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.702287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.702671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.702699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.703054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.703416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.703444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.703691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.704032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.704061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.704301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.704659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.704687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.705064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.705419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.705447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 00:33:13.682 [2024-07-22 18:10:17.705826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.706187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.682 [2024-07-22 18:10:17.706213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.682 qpair failed and we were unable to recover it. 
00:33:13.682 [2024-07-22 18:10:17.706608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.706940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.706974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.707345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.707732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.707760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.708118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.708473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.708501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.708868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.709192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.709218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.709573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.709915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.709941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.710308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.710571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.710598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.710848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.711185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.711212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 
00:33:13.683 [2024-07-22 18:10:17.711573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.711932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.711958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.712370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.712723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.712750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.713099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.713459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.713489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.713856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.714213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.714240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.714631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.714962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.714990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.715327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.715722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.715750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.716007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.716373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.716402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 
00:33:13.683 [2024-07-22 18:10:17.716832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.717065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.717093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.717454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.717820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.717847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.718245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.718627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.718654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.718996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.719361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.719390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.719756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.720073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.720099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.720498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.720854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.720880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.721284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.721647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.721676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 
00:33:13.683 [2024-07-22 18:10:17.722083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.722479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.722508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.722838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.723177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.723204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.723569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.723948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.723976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.724314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.724713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.724741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.725100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.725490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.725518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.725721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.726098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.726125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.726463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 1889824 Killed "${NVMF_APP[@]}" "$@" 00:33:13.683 [2024-07-22 18:10:17.726855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.726884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 
00:33:13.683 [2024-07-22 18:10:17.727294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 18:10:17 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:33:13.683 [2024-07-22 18:10:17.727608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.727636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 18:10:17 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:13.683 18:10:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:13.683 [2024-07-22 18:10:17.728051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 18:10:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:13.683 [2024-07-22 18:10:17.728371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.728401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 18:10:17 -- common/autotest_common.sh@10 -- # set +x 00:33:13.683 [2024-07-22 18:10:17.728801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.729071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.729098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.729449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.729808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.729836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.730185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.730509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.730537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.730934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.731311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.731339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 
00:33:13.683 [2024-07-22 18:10:17.731747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.732088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.732114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.732451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.732698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.732725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.733083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.733444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.733476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.733869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.734234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.734260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.734642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.734977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.735005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 [2024-07-22 18:10:17.735376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.735726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.735756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 00:33:13.683 18:10:17 -- nvmf/common.sh@469 -- # nvmfpid=1890572 00:33:13.683 [2024-07-22 18:10:17.736115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 18:10:17 -- nvmf/common.sh@470 -- # waitforlisten 1890572 00:33:13.683 [2024-07-22 18:10:17.736506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 [2024-07-22 18:10:17.736537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.683 qpair failed and we were unable to recover it. 
00:33:13.683 18:10:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:13.683 18:10:17 -- common/autotest_common.sh@819 -- # '[' -z 1890572 ']' 00:33:13.683 18:10:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:13.683 [2024-07-22 18:10:17.736893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.683 18:10:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:13.684 [2024-07-22 18:10:17.737248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.737277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 18:10:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:13.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:13.684 18:10:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:13.684 [2024-07-22 18:10:17.737653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 18:10:17 -- common/autotest_common.sh@10 -- # set +x 00:33:13.684 [2024-07-22 18:10:17.737992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.738022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.739253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.739624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.739659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.740053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.740275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.740303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.740587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.740942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.740970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.741198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.741540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.741570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 
00:33:13.684 [2024-07-22 18:10:17.741782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.742132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.742161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.742508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.742882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.742911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.743277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.743559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.743591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.743932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.744310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.744339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.744727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.745053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.745082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.745437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.745704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.745737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.745984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.746210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.746240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 
00:33:13.684 [2024-07-22 18:10:17.746483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.746841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.746868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.747270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.747622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.747652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.748034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.748369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.748400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.748647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.749000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.749029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.749270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.749632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.749668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.750073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.750308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.750337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.750762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.750986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.751013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 
00:33:13.684 [2024-07-22 18:10:17.751390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.751513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.751541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.751650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.751954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.751982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.752338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.752617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.752645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.752884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.753233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.753262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.753661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.754009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.754038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.754284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.754641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.754671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.755069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.755435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.755464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 
00:33:13.684 [2024-07-22 18:10:17.755711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.756058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.756086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.756317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.756702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.756731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.756981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.757331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.757376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.757764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.758076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.758104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.758337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.758715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.758744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.758993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.759222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.759249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.759501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.759849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.759879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 
00:33:13.684 [2024-07-22 18:10:17.760269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.760418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.760447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.760573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.760707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.760736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.760956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.761275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.761302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.761707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.761939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.761966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.762377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.762766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.684 [2024-07-22 18:10:17.762793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.684 qpair failed and we were unable to recover it. 00:33:13.684 [2024-07-22 18:10:17.763163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.763501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.763530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.763858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.764210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.764240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 
00:33:13.685 [2024-07-22 18:10:17.764580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.767132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.767208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.767637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.768020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.768050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.768401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.768755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.768783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.769134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.769539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.769568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.769799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.770142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.770169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.770524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.770765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.770791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.771146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.771493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.771522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 
00:33:13.685 [2024-07-22 18:10:17.771901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.772239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.772266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.774191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.774628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.774665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.775033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.775375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.775406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.775743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.776082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.776110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.776465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.776838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.776866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.777226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.777535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.777564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.777803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.778162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.778190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 
00:33:13.685 [2024-07-22 18:10:17.778523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.778872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.778899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.779266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.779633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.779662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.781573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.781990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.782025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.782438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.782815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.782843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.783093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.783471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.783500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.783926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.784292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.784320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.784601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.784996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.785024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 
00:33:13.685 [2024-07-22 18:10:17.785434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.785822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.785849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.786247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.786639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.786669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.786863] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:33:13.685 [2024-07-22 18:10:17.786936] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:13.685 [2024-07-22 18:10:17.787025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.787418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.787448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.787797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.788030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.788058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.788457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.788797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.788825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.789175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.789504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.789540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.789893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.790242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.790271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 
00:33:13.685 [2024-07-22 18:10:17.790624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.790852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.790881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.791280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.791598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.791628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.791851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.792088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.792115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.792362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.792713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.792741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.793498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.793883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.793916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.794319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.794686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.794717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.795116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.795512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.795541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 
00:33:13.685 [2024-07-22 18:10:17.795947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.796289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.796317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.796625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.798505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.798567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.798971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.799369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.799399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.799782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.800123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.800153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.800393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.800750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.800780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.801151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.801378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.801408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.801811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.802033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.802060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 
00:33:13.685 [2024-07-22 18:10:17.802437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.802815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.802842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.803213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.803541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.803571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.803951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.804218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.804245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.804625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.804945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.804973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.685 qpair failed and we were unable to recover it. 00:33:13.685 [2024-07-22 18:10:17.805296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.685 [2024-07-22 18:10:17.805674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.805702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.806053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.806394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.806425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.806786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.807144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.807172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 
00:33:13.686 [2024-07-22 18:10:17.807555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.807812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.807839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.808210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.808551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.808581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.808818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.809215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.809243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.809591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.809979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.810007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.810373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.810610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.810638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.810993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.811378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.811408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.811781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.812090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.812118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 
00:33:13.686 [2024-07-22 18:10:17.812509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.812855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.812884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.813221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.813620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.813650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.814036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.814381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.814412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.814769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.815132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.815162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.815493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.815816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.815844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.816234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.816576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.816607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.817008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.817238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.817267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 
00:33:13.686 [2024-07-22 18:10:17.817456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.817833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.817861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.818101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.818426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.818455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.818641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.818912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.818939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.819282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.819692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.819721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.819972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.820319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.820347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.820604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.820986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.821013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.821377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.821625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.821656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 
00:33:13.686 [2024-07-22 18:10:17.822017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.822363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.822393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.822780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.823140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.823169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.823531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.823886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.823914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.824211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.824522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.824550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.824785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.825129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.825155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.825483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.825846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.825874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.826242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 EAL: No free 2048 kB hugepages reported on node 1 00:33:13.686 [2024-07-22 18:10:17.826635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.826664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 
00:33:13.686 [2024-07-22 18:10:17.827047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.827330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.827377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.827768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.827997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.828024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.828371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.828762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.828792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.829148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.829556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.829585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.829933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.830359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.830390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.830765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.831099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.831127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.831374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.831726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.831755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 
00:33:13.686 [2024-07-22 18:10:17.832031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.832380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.832408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.832754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.833130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.833156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.833551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.833876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.833904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.834256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.834547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.834581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.834853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.835202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.835229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.835486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.835833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.835861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.836216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.836544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.836574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 
00:33:13.686 [2024-07-22 18:10:17.836922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.837244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.837271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.837540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.837881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.837909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.838277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.838601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.838631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.839020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.839403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.839430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.839834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.840043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.840074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.840302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.840730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.840761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.686 qpair failed and we were unable to recover it. 00:33:13.686 [2024-07-22 18:10:17.841177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.686 [2024-07-22 18:10:17.841456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.841484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 
00:33:13.687 [2024-07-22 18:10:17.841851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.842175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.842201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.842549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.842896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.842923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.843278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.843663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.843692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.844015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.844247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.844274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.844525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.844835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.844861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.845102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.845526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.845554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.845919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.846270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.846296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 
00:33:13.687 [2024-07-22 18:10:17.846670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.847027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.847054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.847400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.847761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.847790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.848134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.848372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.848403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.848756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.849167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.849193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.849575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.849991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.850018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.850253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.850602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.850632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.850864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.851272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.851299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 
00:33:13.687 [2024-07-22 18:10:17.851673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.852006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.852035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.852408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.852729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.852757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.853119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.853464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.853494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.853848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.854204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.854231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.854478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.854732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.854759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.855117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.855364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.855392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.855787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.856134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.856162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 
00:33:13.687 [2024-07-22 18:10:17.856574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.856924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.856951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.857305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.857667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.857696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.858028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.858374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.858404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.858795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.859138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.859167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.859539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.859899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.859925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.860284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.860612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.860640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.860991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.861331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.861371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 
00:33:13.687 [2024-07-22 18:10:17.861738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.862072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.862099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.862323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.862727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.862755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.863076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.863469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.863500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.863855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.864199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.864226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.864565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.864903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.864930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.865207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.865462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.865491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.865738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.865989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.866017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 
00:33:13.687 [2024-07-22 18:10:17.866371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.866716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.866742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.867028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.867298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.867325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.867733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.867968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.867998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.868344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.868711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.868739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.869088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.869420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.869448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.869807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.870153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.870187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.870552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.870887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.870915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 
00:33:13.687 [2024-07-22 18:10:17.871256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.871596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.871625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.871942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.872268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.872296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.872704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.873073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.687 [2024-07-22 18:10:17.873101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.687 qpair failed and we were unable to recover it. 00:33:13.687 [2024-07-22 18:10:17.873423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.873804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.873831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.874141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.874492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.874521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.874873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.875254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.875282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.875651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.875993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.876023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 
00:33:13.688 [2024-07-22 18:10:17.876252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.876635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.876664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.877018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.877271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.877298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.877707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.878040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.878068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.878422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.878672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.878699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.878968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.879337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.879374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.879749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.880090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.880117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.880463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.880833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.880861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 
00:33:13.688 [2024-07-22 18:10:17.881200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.881537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.881565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.881939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.882334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.882374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.882689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.882810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.882838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.883249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.883620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.883651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.884003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.884337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.884377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.884746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.885057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.885086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.885416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.885753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.885780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 
00:33:13.688 [2024-07-22 18:10:17.886125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.886313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.886339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.886736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.887083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.887110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.887456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.887816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.887843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.888088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.888327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.888367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.888630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.888994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.889022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.889373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.889744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.889773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.890126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.890474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.890503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 
00:33:13.688 [2024-07-22 18:10:17.890921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.891267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.891294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.891669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.892010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.892037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.892391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.892644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.892671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.892831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.893179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.893206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.893545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.893771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.893801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.894163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.894511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.894540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.894892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.895273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.895299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 
00:33:13.688 [2024-07-22 18:10:17.895914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.896269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.896297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.896671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.897034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.897060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.897310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.897702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.897731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.898083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.898431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.898466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.898800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.899159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.899187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.899445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.899798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.899825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.900164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.900488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.900516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 
00:33:13.688 [2024-07-22 18:10:17.900876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.901207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.901234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.901476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.901827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.901854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.902238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.902577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.902606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.902970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.903311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.903338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.903724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.904071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.904098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.904453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.904695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.904721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.905081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.905436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.905464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 
00:33:13.688 [2024-07-22 18:10:17.905739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.906050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.906083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.906458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.906793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.906823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.907065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.907394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.907423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.907778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.908131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.908159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.688 qpair failed and we were unable to recover it. 00:33:13.688 [2024-07-22 18:10:17.908513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.688 [2024-07-22 18:10:17.908903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.908931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.909278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.909641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.909670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.909975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.910310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.910338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 
00:33:13.689 [2024-07-22 18:10:17.910714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.911059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.911087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.911334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.911606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.911637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.911989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.912345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.912392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.912773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.912983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.913011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.913382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.913736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.913764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.914143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.914484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.914512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.914886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.915274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.915302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 
00:33:13.689 [2024-07-22 18:10:17.915599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.915808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.915836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.916095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.916417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.916447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.916815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.917133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.917161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.917539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.917945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.917973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.918322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.918669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.918698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.919035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.919383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.919411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.919648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.919986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.920013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 
00:33:13.689 [2024-07-22 18:10:17.920390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.920623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.920651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.921011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.921369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.921398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.921751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.922108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.922135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.922479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.922835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.922862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.923104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.923430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.923458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.923851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.924201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.924229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.924565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.924920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.924946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 
00:33:13.689 [2024-07-22 18:10:17.925315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.925695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.925724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.926142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.926508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.926537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.926841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.927164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.927190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.927433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.927805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.927832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.928153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.928491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.928520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.928915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.929185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.929211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.929610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.929936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.929963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 
00:33:13.689 [2024-07-22 18:10:17.930317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.689 [2024-07-22 18:10:17.930599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.689 [2024-07-22 18:10:17.930627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420
00:33:13.689 qpair failed and we were unable to recover it.
00:33:13.689 [2024-07-22 18:10:17.930980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.689 [2024-07-22 18:10:17.931322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.689 [2024-07-22 18:10:17.931362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420
00:33:13.689 qpair failed and we were unable to recover it.
00:33:13.689 [2024-07-22 18:10:17.931727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.689 [2024-07-22 18:10:17.932068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.689 [2024-07-22 18:10:17.932095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420
00:33:13.689 qpair failed and we were unable to recover it.
00:33:13.689 [2024-07-22 18:10:17.932443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.689 [2024-07-22 18:10:17.932672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.689 [2024-07-22 18:10:17.932703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420
00:33:13.689 qpair failed and we were unable to recover it.
00:33:13.689 [2024-07-22 18:10:17.932936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.689 [2024-07-22 18:10:17.933304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.689 [2024-07-22 18:10:17.933332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420
00:33:13.689 qpair failed and we were unable to recover it.
00:33:13.689 [2024-07-22 18:10:17.933564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.689 [2024-07-22 18:10:17.933918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.689 [2024-07-22 18:10:17.933946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420
00:33:13.689 qpair failed and we were unable to recover it.
00:33:13.689 [2024-07-22 18:10:17.933990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:33:13.689 [2024-07-22 18:10:17.934294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.689 [2024-07-22 18:10:17.934566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:13.689 [2024-07-22 18:10:17.934595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420
00:33:13.689 qpair failed and we were unable to recover it.
00:33:13.689 [2024-07-22 18:10:17.934956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.935324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.935367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.935747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.936094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.936122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.936468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.936823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.936851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.937095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.937449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.937478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.937709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.938052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.938078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.938438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.938825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.938852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.939221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.939445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.939474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 
00:33:13.689 [2024-07-22 18:10:17.939823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.940168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.940195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.940546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.940917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.940944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.941243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.941574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.941603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.689 qpair failed and we were unable to recover it. 00:33:13.689 [2024-07-22 18:10:17.941956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.942343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.689 [2024-07-22 18:10:17.942385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.690 qpair failed and we were unable to recover it. 00:33:13.690 [2024-07-22 18:10:17.942657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.690 [2024-07-22 18:10:17.942970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.690 [2024-07-22 18:10:17.942996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.690 qpair failed and we were unable to recover it. 00:33:13.690 [2024-07-22 18:10:17.943247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.690 [2024-07-22 18:10:17.943637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.690 [2024-07-22 18:10:17.943666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.690 qpair failed and we were unable to recover it. 00:33:13.690 [2024-07-22 18:10:17.944012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.690 [2024-07-22 18:10:17.944260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.690 [2024-07-22 18:10:17.944288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.690 qpair failed and we were unable to recover it. 
00:33:13.690 [2024-07-22 18:10:17.944509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.690 [2024-07-22 18:10:17.944859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.690 [2024-07-22 18:10:17.944887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.690 qpair failed and we were unable to recover it. 00:33:13.690 [2024-07-22 18:10:17.945261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.690 [2024-07-22 18:10:17.945638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.690 [2024-07-22 18:10:17.945666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.690 qpair failed and we were unable to recover it. 00:33:13.690 [2024-07-22 18:10:17.945986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.690 [2024-07-22 18:10:17.946313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.690 [2024-07-22 18:10:17.946342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.690 qpair failed and we were unable to recover it. 00:33:13.690 [2024-07-22 18:10:17.946745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.690 [2024-07-22 18:10:17.947090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.690 [2024-07-22 18:10:17.947117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.690 qpair failed and we were unable to recover it. 00:33:13.690 [2024-07-22 18:10:17.947532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.690 [2024-07-22 18:10:17.947847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.690 [2024-07-22 18:10:17.947874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.690 qpair failed and we were unable to recover it. 00:33:13.690 [2024-07-22 18:10:17.948271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.690 [2024-07-22 18:10:17.948639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.690 [2024-07-22 18:10:17.948670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.690 qpair failed and we were unable to recover it. 00:33:13.690 [2024-07-22 18:10:17.948981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.690 [2024-07-22 18:10:17.949238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.690 [2024-07-22 18:10:17.949265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.690 qpair failed and we were unable to recover it. 
00:33:13.959 [2024-07-22 18:10:17.949641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.959 [2024-07-22 18:10:17.949987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.959 [2024-07-22 18:10:17.950015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.959 qpair failed and we were unable to recover it. 00:33:13.959 [2024-07-22 18:10:17.950382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.959 [2024-07-22 18:10:17.950607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.959 [2024-07-22 18:10:17.950633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.959 qpair failed and we were unable to recover it. 00:33:13.959 [2024-07-22 18:10:17.950960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.959 [2024-07-22 18:10:17.951389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.959 [2024-07-22 18:10:17.951419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.959 qpair failed and we were unable to recover it. 00:33:13.960 [2024-07-22 18:10:17.951801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.952122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.952148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.960 qpair failed and we were unable to recover it. 00:33:13.960 [2024-07-22 18:10:17.952508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.952848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.952874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.960 qpair failed and we were unable to recover it. 00:33:13.960 [2024-07-22 18:10:17.953218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.953554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.953583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.960 qpair failed and we were unable to recover it. 00:33:13.960 [2024-07-22 18:10:17.953959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.954298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.954325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.960 qpair failed and we were unable to recover it. 
00:33:13.960 [2024-07-22 18:10:17.954620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.954990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.955017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.960 qpair failed and we were unable to recover it. 00:33:13.960 [2024-07-22 18:10:17.955373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.955766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.955799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.960 qpair failed and we were unable to recover it. 00:33:13.960 [2024-07-22 18:10:17.956157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.956422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.956450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.960 qpair failed and we were unable to recover it. 00:33:13.960 [2024-07-22 18:10:17.956799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.957021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.957047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.960 qpair failed and we were unable to recover it. 00:33:13.960 [2024-07-22 18:10:17.957401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.957799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.957828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.960 qpair failed and we were unable to recover it. 00:33:13.960 [2024-07-22 18:10:17.958251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.958619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.958647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.960 qpair failed and we were unable to recover it. 00:33:13.960 [2024-07-22 18:10:17.958869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.959220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.959247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.960 qpair failed and we were unable to recover it. 
00:33:13.960 [2024-07-22 18:10:17.959671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.960003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.960030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.960 qpair failed and we were unable to recover it. 00:33:13.960 [2024-07-22 18:10:17.960443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.960701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.960728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.960 qpair failed and we were unable to recover it. 00:33:13.960 [2024-07-22 18:10:17.961072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.961455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.961484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.960 qpair failed and we were unable to recover it. 00:33:13.960 [2024-07-22 18:10:17.961738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.962128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.962156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.960 qpair failed and we were unable to recover it. 00:33:13.960 [2024-07-22 18:10:17.962475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.962832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.962859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.960 qpair failed and we were unable to recover it. 00:33:13.960 [2024-07-22 18:10:17.963214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.963609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.963638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.960 qpair failed and we were unable to recover it. 00:33:13.960 [2024-07-22 18:10:17.963977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.964306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.964333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.960 qpair failed and we were unable to recover it. 
00:33:13.960 [2024-07-22 18:10:17.964725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.964979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.965005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.960 qpair failed and we were unable to recover it. 00:33:13.960 [2024-07-22 18:10:17.965174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.965511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.965541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.960 qpair failed and we were unable to recover it. 00:33:13.960 [2024-07-22 18:10:17.965891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.966230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.966258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.960 qpair failed and we were unable to recover it. 00:33:13.960 [2024-07-22 18:10:17.966623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.966953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.966980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.960 qpair failed and we were unable to recover it. 00:33:13.960 [2024-07-22 18:10:17.967326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.967718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.967745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.960 qpair failed and we were unable to recover it. 00:33:13.960 [2024-07-22 18:10:17.968123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.968393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.968420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.960 qpair failed and we were unable to recover it. 00:33:13.960 [2024-07-22 18:10:17.968824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.969157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.969185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.960 qpair failed and we were unable to recover it. 
00:33:13.960 [2024-07-22 18:10:17.969626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.969974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.970001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.960 qpair failed and we were unable to recover it. 00:33:13.960 [2024-07-22 18:10:17.970304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.970710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.970738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.960 qpair failed and we were unable to recover it. 00:33:13.960 [2024-07-22 18:10:17.971088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.971434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.971463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.960 qpair failed and we were unable to recover it. 00:33:13.960 [2024-07-22 18:10:17.971835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.960 [2024-07-22 18:10:17.972182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.972209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.961 qpair failed and we were unable to recover it. 00:33:13.961 [2024-07-22 18:10:17.972581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.972950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.972978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.961 qpair failed and we were unable to recover it. 00:33:13.961 [2024-07-22 18:10:17.973363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.973638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.973666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.961 qpair failed and we were unable to recover it. 00:33:13.961 [2024-07-22 18:10:17.974021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.974340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.974389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.961 qpair failed and we were unable to recover it. 
00:33:13.961 [2024-07-22 18:10:17.974780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.975137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.975164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.961 qpair failed and we were unable to recover it. 00:33:13.961 [2024-07-22 18:10:17.975361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.975769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.975797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.961 qpair failed and we were unable to recover it. 00:33:13.961 [2024-07-22 18:10:17.976140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.976510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.976539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.961 qpair failed and we were unable to recover it. 00:33:13.961 [2024-07-22 18:10:17.976909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.977261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.977288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.961 qpair failed and we were unable to recover it. 00:33:13.961 [2024-07-22 18:10:17.977674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.978042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.978070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.961 qpair failed and we were unable to recover it. 00:33:13.961 [2024-07-22 18:10:17.978420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.978772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.978799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.961 qpair failed and we were unable to recover it. 00:33:13.961 [2024-07-22 18:10:17.978946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.979327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.979364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.961 qpair failed and we were unable to recover it. 
00:33:13.961 [2024-07-22 18:10:17.979712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.980064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.980092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.961 qpair failed and we were unable to recover it. 00:33:13.961 [2024-07-22 18:10:17.980344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.980613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.980642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.961 qpair failed and we were unable to recover it. 00:33:13.961 [2024-07-22 18:10:17.981011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.981270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.981301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.961 qpair failed and we were unable to recover it. 00:33:13.961 [2024-07-22 18:10:17.981668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.982039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.982067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.961 qpair failed and we were unable to recover it. 00:33:13.961 [2024-07-22 18:10:17.982440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.982803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.982831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.961 qpair failed and we were unable to recover it. 00:33:13.961 [2024-07-22 18:10:17.983185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.983415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.983442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.961 qpair failed and we were unable to recover it. 00:33:13.961 [2024-07-22 18:10:17.983804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.984138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.984164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.961 qpair failed and we were unable to recover it. 
00:33:13.961 [2024-07-22 18:10:17.984429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.984798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.984826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.961 qpair failed and we were unable to recover it. 00:33:13.961 [2024-07-22 18:10:17.985140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.985498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.985527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.961 qpair failed and we were unable to recover it. 00:33:13.961 [2024-07-22 18:10:17.985876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.986227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.986254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.961 qpair failed and we were unable to recover it. 00:33:13.961 [2024-07-22 18:10:17.986627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.986963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.986990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.961 qpair failed and we were unable to recover it. 00:33:13.961 [2024-07-22 18:10:17.987389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.987844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.987871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.961 qpair failed and we were unable to recover it. 00:33:13.961 [2024-07-22 18:10:17.988083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.988385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.988413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.961 qpair failed and we were unable to recover it. 00:33:13.961 [2024-07-22 18:10:17.988763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.989110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.989138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.961 qpair failed and we were unable to recover it. 
00:33:13.961 [2024-07-22 18:10:17.989565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.989696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.989723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.961 qpair failed and we were unable to recover it. 00:33:13.961 [2024-07-22 18:10:17.990098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.990333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.990376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.961 qpair failed and we were unable to recover it. 00:33:13.961 [2024-07-22 18:10:17.990735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.990952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.990981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.961 qpair failed and we were unable to recover it. 00:33:13.961 [2024-07-22 18:10:17.991335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.991602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.961 [2024-07-22 18:10:17.991636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.961 qpair failed and we were unable to recover it. 00:33:13.962 [2024-07-22 18:10:17.992091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:17.992427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:17.992456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.962 qpair failed and we were unable to recover it. 00:33:13.962 [2024-07-22 18:10:17.992914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:17.993142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:17.993172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.962 qpair failed and we were unable to recover it. 00:33:13.962 [2024-07-22 18:10:17.993540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:17.993900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:17.993928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.962 qpair failed and we were unable to recover it. 
00:33:13.962 [2024-07-22 18:10:17.994291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:17.994524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:17.994558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.962 qpair failed and we were unable to recover it. 00:33:13.962 [2024-07-22 18:10:17.994813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:17.995179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:17.995206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.962 qpair failed and we were unable to recover it. 00:33:13.962 [2024-07-22 18:10:17.995584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:17.995956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:17.995984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.962 qpair failed and we were unable to recover it. 00:33:13.962 [2024-07-22 18:10:17.996206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:17.996472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:17.996502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.962 qpair failed and we were unable to recover it. 00:33:13.962 [2024-07-22 18:10:17.996863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:17.997184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:17.997212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.962 qpair failed and we were unable to recover it. 00:33:13.962 [2024-07-22 18:10:17.997447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:17.997677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:17.997703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.962 qpair failed and we were unable to recover it. 00:33:13.962 [2024-07-22 18:10:17.998072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:17.998334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:17.998373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.962 qpair failed and we were unable to recover it. 
00:33:13.962 [2024-07-22 18:10:17.998768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:17.999090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:17.999118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.962 qpair failed and we were unable to recover it. 00:33:13.962 [2024-07-22 18:10:17.999497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:17.999863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:17.999891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.962 qpair failed and we were unable to recover it. 00:33:13.962 [2024-07-22 18:10:18.000183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.000530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.000559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.962 qpair failed and we were unable to recover it. 00:33:13.962 [2024-07-22 18:10:18.000800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.001140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.001167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.962 qpair failed and we were unable to recover it. 00:33:13.962 [2024-07-22 18:10:18.001519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.001719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.001745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.962 qpair failed and we were unable to recover it. 00:33:13.962 [2024-07-22 18:10:18.002083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.002399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.002428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.962 qpair failed and we were unable to recover it. 00:33:13.962 [2024-07-22 18:10:18.002770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.003093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.003121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.962 qpair failed and we were unable to recover it. 
00:33:13.962 [2024-07-22 18:10:18.003438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.003787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.003814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.962 qpair failed and we were unable to recover it. 00:33:13.962 [2024-07-22 18:10:18.004059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.004395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.004425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.962 qpair failed and we were unable to recover it. 00:33:13.962 [2024-07-22 18:10:18.004811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.005156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.005185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.962 qpair failed and we were unable to recover it. 00:33:13.962 [2024-07-22 18:10:18.005533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.005877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.005906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.962 qpair failed and we were unable to recover it. 00:33:13.962 [2024-07-22 18:10:18.006262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.006493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.006521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.962 qpair failed and we were unable to recover it. 00:33:13.962 [2024-07-22 18:10:18.006886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.007127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.007156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.962 qpair failed and we were unable to recover it. 00:33:13.962 [2024-07-22 18:10:18.007396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.007759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.007787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.962 qpair failed and we were unable to recover it. 
00:33:13.962 [2024-07-22 18:10:18.008122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.008503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.008532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.962 qpair failed and we were unable to recover it. 00:33:13.962 [2024-07-22 18:10:18.008778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.008995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.009027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.962 qpair failed and we were unable to recover it. 00:33:13.962 [2024-07-22 18:10:18.009400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.009787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.009814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.962 qpair failed and we were unable to recover it. 00:33:13.962 [2024-07-22 18:10:18.010185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.010493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.010520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.962 qpair failed and we were unable to recover it. 00:33:13.962 [2024-07-22 18:10:18.010941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.011293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.962 [2024-07-22 18:10:18.011320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.963 qpair failed and we were unable to recover it. 00:33:13.963 [2024-07-22 18:10:18.011646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.012009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.012037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.963 qpair failed and we were unable to recover it. 00:33:13.963 [2024-07-22 18:10:18.012487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.012741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.012767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.963 qpair failed and we were unable to recover it. 
00:33:13.963 [2024-07-22 18:10:18.013027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.013417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.013445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.963 qpair failed and we were unable to recover it. 00:33:13.963 [2024-07-22 18:10:18.013804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.014029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.014056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.963 qpair failed and we were unable to recover it. 00:33:13.963 [2024-07-22 18:10:18.014383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.014626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.014652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.963 qpair failed and we were unable to recover it. 00:33:13.963 [2024-07-22 18:10:18.015009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.015338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.015382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.963 qpair failed and we were unable to recover it. 00:33:13.963 [2024-07-22 18:10:18.015755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.015980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.016008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.963 qpair failed and we were unable to recover it. 00:33:13.963 [2024-07-22 18:10:18.016385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.016742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.016770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.963 qpair failed and we were unable to recover it. 00:33:13.963 [2024-07-22 18:10:18.017001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.017370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.017399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.963 qpair failed and we were unable to recover it. 
00:33:13.963 [2024-07-22 18:10:18.017782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.017985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.018012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.963 qpair failed and we were unable to recover it. 00:33:13.963 [2024-07-22 18:10:18.018269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.018642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.018672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.963 qpair failed and we were unable to recover it. 00:33:13.963 [2024-07-22 18:10:18.019021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.019447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.019477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.963 qpair failed and we were unable to recover it. 00:33:13.963 [2024-07-22 18:10:18.019837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.020219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.020246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.963 qpair failed and we were unable to recover it. 00:33:13.963 [2024-07-22 18:10:18.020576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.020938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.020967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.963 qpair failed and we were unable to recover it. 00:33:13.963 [2024-07-22 18:10:18.021289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.021703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.021733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.963 qpair failed and we were unable to recover it. 00:33:13.963 [2024-07-22 18:10:18.022070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.022284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.022311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.963 qpair failed and we were unable to recover it. 
00:33:13.963 [2024-07-22 18:10:18.022716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.022945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.022973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.963 qpair failed and we were unable to recover it. 00:33:13.963 [2024-07-22 18:10:18.023213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.023616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.023647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.963 qpair failed and we were unable to recover it. 00:33:13.963 [2024-07-22 18:10:18.023983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.024307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.024335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.963 qpair failed and we were unable to recover it. 00:33:13.963 [2024-07-22 18:10:18.024651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.024926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.024952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.963 qpair failed and we were unable to recover it. 00:33:13.963 [2024-07-22 18:10:18.025298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.025564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.025592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.963 qpair failed and we were unable to recover it. 00:33:13.963 [2024-07-22 18:10:18.025968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.026206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.026240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.963 qpair failed and we were unable to recover it. 00:33:13.963 [2024-07-22 18:10:18.026582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.026902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.026929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.963 qpair failed and we were unable to recover it. 
00:33:13.963 [2024-07-22 18:10:18.027316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.027583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.027611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.963 qpair failed and we were unable to recover it. 00:33:13.963 [2024-07-22 18:10:18.027982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.028322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.028361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.963 qpair failed and we were unable to recover it. 00:33:13.963 [2024-07-22 18:10:18.028757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.029047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.029075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.963 qpair failed and we were unable to recover it. 00:33:13.963 [2024-07-22 18:10:18.029470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.029698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.029727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.963 qpair failed and we were unable to recover it. 00:33:13.963 [2024-07-22 18:10:18.030068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.030400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.963 [2024-07-22 18:10:18.030429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.963 qpair failed and we were unable to recover it. 00:33:13.963 [2024-07-22 18:10:18.030809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.031078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.031105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.964 qpair failed and we were unable to recover it. 00:33:13.964 [2024-07-22 18:10:18.031343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.031700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.031727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.964 qpair failed and we were unable to recover it. 
00:33:13.964 [2024-07-22 18:10:18.032046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.032398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.032427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.964 qpair failed and we were unable to recover it. 00:33:13.964 [2024-07-22 18:10:18.032798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.033151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.033178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.964 qpair failed and we were unable to recover it. 00:33:13.964 [2024-07-22 18:10:18.033548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.033814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.033841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.964 qpair failed and we were unable to recover it. 00:33:13.964 [2024-07-22 18:10:18.033981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.034237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.034266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.964 qpair failed and we were unable to recover it. 00:33:13.964 [2024-07-22 18:10:18.034635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.034981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.035010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.964 qpair failed and we were unable to recover it. 00:33:13.964 [2024-07-22 18:10:18.035246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.035584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.035612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.964 qpair failed and we were unable to recover it. 00:33:13.964 [2024-07-22 18:10:18.035966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.036304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.036333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.964 qpair failed and we were unable to recover it. 
00:33:13.964 [2024-07-22 18:10:18.036735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.037065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.037092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.964 qpair failed and we were unable to recover it. 00:33:13.964 [2024-07-22 18:10:18.037455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.037804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.037832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.964 qpair failed and we were unable to recover it. 00:33:13.964 [2024-07-22 18:10:18.038153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.038508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.038536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.964 qpair failed and we were unable to recover it. 00:33:13.964 [2024-07-22 18:10:18.038948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.039190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.039216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.964 qpair failed and we were unable to recover it. 00:33:13.964 [2024-07-22 18:10:18.039642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.039995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.040024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.964 qpair failed and we were unable to recover it. 00:33:13.964 [2024-07-22 18:10:18.040387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.040654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.040681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.964 qpair failed and we were unable to recover it. 00:33:13.964 [2024-07-22 18:10:18.041031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.041378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.041407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.964 qpair failed and we were unable to recover it. 
00:33:13.964 [2024-07-22 18:10:18.041803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.041987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.042014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.964 qpair failed and we were unable to recover it. 00:33:13.964 [2024-07-22 18:10:18.042380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.042721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.042756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.964 qpair failed and we were unable to recover it. 00:33:13.964 [2024-07-22 18:10:18.043098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.043329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.043370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.964 qpair failed and we were unable to recover it. 00:33:13.964 [2024-07-22 18:10:18.043723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.044070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.044098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.964 qpair failed and we were unable to recover it. 00:33:13.964 [2024-07-22 18:10:18.044463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.044798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.044825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.964 qpair failed and we were unable to recover it. 00:33:13.964 [2024-07-22 18:10:18.045097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.045425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.045455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.964 qpair failed and we were unable to recover it. 00:33:13.964 [2024-07-22 18:10:18.045777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.046123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.046151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.964 qpair failed and we were unable to recover it. 
00:33:13.964 [2024-07-22 18:10:18.046418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.046777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.964 [2024-07-22 18:10:18.046806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.964 qpair failed and we were unable to recover it. 00:33:13.965 [2024-07-22 18:10:18.047154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.047508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.047537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.965 qpair failed and we were unable to recover it. 00:33:13.965 [2024-07-22 18:10:18.047869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.048173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.048201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.965 qpair failed and we were unable to recover it. 00:33:13.965 [2024-07-22 18:10:18.048557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.048929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.048957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.965 qpair failed and we were unable to recover it. 00:33:13.965 [2024-07-22 18:10:18.049320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.049680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.049709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.965 qpair failed and we were unable to recover it. 00:33:13.965 [2024-07-22 18:10:18.050048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.050408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.050437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.965 qpair failed and we were unable to recover it. 00:33:13.965 [2024-07-22 18:10:18.050818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.051166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.051193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.965 qpair failed and we were unable to recover it. 
00:33:13.965 [2024-07-22 18:10:18.051523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.051862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.051889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.965 qpair failed and we were unable to recover it. 00:33:13.965 [2024-07-22 18:10:18.052258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.052519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.052547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.965 qpair failed and we were unable to recover it. 00:33:13.965 [2024-07-22 18:10:18.052932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.053199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.053228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.965 qpair failed and we were unable to recover it. 00:33:13.965 [2024-07-22 18:10:18.053584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.053811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.053837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.965 qpair failed and we were unable to recover it. 00:33:13.965 [2024-07-22 18:10:18.054165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.054523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.054552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.965 qpair failed and we were unable to recover it. 00:33:13.965 [2024-07-22 18:10:18.054908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.055225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.055252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.965 qpair failed and we were unable to recover it. 00:33:13.965 [2024-07-22 18:10:18.055611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.055957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.055984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.965 qpair failed and we were unable to recover it. 
00:33:13.965 [2024-07-22 18:10:18.056336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.056693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.056721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.965 qpair failed and we were unable to recover it. 00:33:13.965 [2024-07-22 18:10:18.057058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.057397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.057426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.965 qpair failed and we were unable to recover it. 00:33:13.965 [2024-07-22 18:10:18.057771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.058126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.058154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.965 qpair failed and we were unable to recover it. 00:33:13.965 [2024-07-22 18:10:18.058524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.058880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.058908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.965 qpair failed and we were unable to recover it. 00:33:13.965 [2024-07-22 18:10:18.059258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.059570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.059600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.965 qpair failed and we were unable to recover it. 00:33:13.965 [2024-07-22 18:10:18.059926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.060277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.060306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.965 qpair failed and we were unable to recover it. 00:33:13.965 [2024-07-22 18:10:18.060693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.061037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.061066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.965 qpair failed and we were unable to recover it. 
00:33:13.965 [2024-07-22 18:10:18.061310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.061655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.061690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.965 qpair failed and we were unable to recover it. 00:33:13.965 [2024-07-22 18:10:18.062022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.062391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.062421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.965 qpair failed and we were unable to recover it. 00:33:13.965 [2024-07-22 18:10:18.062742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.063021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.063048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.965 qpair failed and we were unable to recover it. 00:33:13.965 [2024-07-22 18:10:18.063284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.063640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.063668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.965 qpair failed and we were unable to recover it. 00:33:13.965 [2024-07-22 18:10:18.064018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.064373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.064403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.965 qpair failed and we were unable to recover it. 00:33:13.965 [2024-07-22 18:10:18.064654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.065042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.065069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.965 qpair failed and we were unable to recover it. 00:33:13.965 [2024-07-22 18:10:18.065419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.065703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.065731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.965 qpair failed and we were unable to recover it. 
00:33:13.965 [2024-07-22 18:10:18.066091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.066427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.066466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.965 qpair failed and we were unable to recover it. 00:33:13.965 [2024-07-22 18:10:18.066835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.067248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.965 [2024-07-22 18:10:18.067276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.966 qpair failed and we were unable to recover it. 00:33:13.966 [2024-07-22 18:10:18.067627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.067865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.067891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.966 qpair failed and we were unable to recover it. 00:33:13.966 [2024-07-22 18:10:18.068126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.068528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.068563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.966 qpair failed and we were unable to recover it. 00:33:13.966 [2024-07-22 18:10:18.068938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.069375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.069403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.966 qpair failed and we were unable to recover it. 00:33:13.966 [2024-07-22 18:10:18.069665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.069984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.070012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.966 qpair failed and we were unable to recover it. 00:33:13.966 [2024-07-22 18:10:18.070331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.070632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.070659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.966 qpair failed and we were unable to recover it. 
00:33:13.966 [2024-07-22 18:10:18.071029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.071370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.071399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.966 qpair failed and we were unable to recover it. 00:33:13.966 [2024-07-22 18:10:18.071779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.072140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.072169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.966 qpair failed and we were unable to recover it. 00:33:13.966 [2024-07-22 18:10:18.072563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.072899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.072926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.966 qpair failed and we were unable to recover it. 00:33:13.966 [2024-07-22 18:10:18.073197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.073535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.073563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.966 qpair failed and we were unable to recover it. 00:33:13.966 [2024-07-22 18:10:18.073868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.074201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.074227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.966 qpair failed and we were unable to recover it. 00:33:13.966 [2024-07-22 18:10:18.074575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.074940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.074968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.966 qpair failed and we were unable to recover it. 00:33:13.966 [2024-07-22 18:10:18.075374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.075757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.075786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.966 qpair failed and we were unable to recover it. 
00:33:13.966 [2024-07-22 18:10:18.076143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.076472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.076499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.966 qpair failed and we were unable to recover it. 00:33:13.966 [2024-07-22 18:10:18.076843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.077190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.077216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.966 qpair failed and we were unable to recover it. 00:33:13.966 [2024-07-22 18:10:18.077581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.077902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.077929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.966 qpair failed and we were unable to recover it. 00:33:13.966 [2024-07-22 18:10:18.078296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.078670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.078699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.966 qpair failed and we were unable to recover it. 00:33:13.966 [2024-07-22 18:10:18.079021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.079356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.079385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.966 qpair failed and we were unable to recover it. 00:33:13.966 [2024-07-22 18:10:18.079727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.080104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.080132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.966 qpair failed and we were unable to recover it. 00:33:13.966 [2024-07-22 18:10:18.080440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.080790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.080817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.966 qpair failed and we were unable to recover it. 
00:33:13.966 [2024-07-22 18:10:18.081174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.081464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.081492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.966 qpair failed and we were unable to recover it. 00:33:13.966 [2024-07-22 18:10:18.081851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.082211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.082238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.966 qpair failed and we were unable to recover it. 00:33:13.966 [2024-07-22 18:10:18.082486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.082765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.082792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.966 qpair failed and we were unable to recover it. 00:33:13.966 [2024-07-22 18:10:18.083154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.083487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.083516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.966 qpair failed and we were unable to recover it. 00:33:13.966 [2024-07-22 18:10:18.083881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.084206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.084234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.966 qpair failed and we were unable to recover it. 00:33:13.966 [2024-07-22 18:10:18.084643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.084875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.084902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.966 qpair failed and we were unable to recover it. 00:33:13.966 [2024-07-22 18:10:18.085220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.085465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.085493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.966 qpair failed and we were unable to recover it. 
00:33:13.966 [2024-07-22 18:10:18.085847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.086077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.086104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.966 qpair failed and we were unable to recover it. 00:33:13.966 [2024-07-22 18:10:18.086482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.086850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.086878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.966 qpair failed and we were unable to recover it. 00:33:13.966 [2024-07-22 18:10:18.087237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.966 [2024-07-22 18:10:18.087526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.087554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.967 qpair failed and we were unable to recover it. 00:33:13.967 [2024-07-22 18:10:18.087908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.088286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.088313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.967 qpair failed and we were unable to recover it. 00:33:13.967 [2024-07-22 18:10:18.088764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.089137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.089164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.967 qpair failed and we were unable to recover it. 00:33:13.967 [2024-07-22 18:10:18.089546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.089897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.089926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.967 qpair failed and we were unable to recover it. 00:33:13.967 [2024-07-22 18:10:18.090177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.090597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.090627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.967 qpair failed and we were unable to recover it. 
00:33:13.967 [2024-07-22 18:10:18.091047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.091417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.091445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.967 qpair failed and we were unable to recover it. 00:33:13.967 [2024-07-22 18:10:18.091866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.092195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.092222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.967 qpair failed and we were unable to recover it. 00:33:13.967 [2024-07-22 18:10:18.092578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.092930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.092957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.967 qpair failed and we were unable to recover it. 00:33:13.967 [2024-07-22 18:10:18.093280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.093639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.093669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.967 qpair failed and we were unable to recover it. 00:33:13.967 [2024-07-22 18:10:18.094023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.094340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.094381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.967 qpair failed and we were unable to recover it. 00:33:13.967 [2024-07-22 18:10:18.094640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.094853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.094881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.967 qpair failed and we were unable to recover it. 00:33:13.967 [2024-07-22 18:10:18.095241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.095474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.095502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.967 qpair failed and we were unable to recover it. 
00:33:13.967 [2024-07-22 18:10:18.095849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.096175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.096202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.967 qpair failed and we were unable to recover it. 00:33:13.967 [2024-07-22 18:10:18.096565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.096889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.096917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.967 qpair failed and we were unable to recover it. 00:33:13.967 [2024-07-22 18:10:18.097345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.097701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.097729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.967 qpair failed and we were unable to recover it. 00:33:13.967 [2024-07-22 18:10:18.098056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.098409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.098438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.967 qpair failed and we were unable to recover it. 00:33:13.967 [2024-07-22 18:10:18.098803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.099160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.099187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.967 qpair failed and we were unable to recover it. 00:33:13.967 [2024-07-22 18:10:18.099425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.099851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.099878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.967 qpair failed and we were unable to recover it. 00:33:13.967 [2024-07-22 18:10:18.100297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.100533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.100562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.967 qpair failed and we were unable to recover it. 
00:33:13.967 [2024-07-22 18:10:18.100925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.101289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.101316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.967 qpair failed and we were unable to recover it. 00:33:13.967 [2024-07-22 18:10:18.101695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.101863] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:13.967 [2024-07-22 18:10:18.102049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.102077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.967 qpair failed and we were unable to recover it. 00:33:13.967 [2024-07-22 18:10:18.102189] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:13.967 [2024-07-22 18:10:18.102213] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:13.967 [2024-07-22 18:10:18.102233] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:13.967 [2024-07-22 18:10:18.102409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.102439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:33:13.967 [2024-07-22 18:10:18.102680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.102706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.967 qpair failed and we were unable to recover it. 00:33:13.967 [2024-07-22 18:10:18.102671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:33:13.967 [2024-07-22 18:10:18.102821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:33:13.967 [2024-07-22 18:10:18.102829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:33:13.967 [2024-07-22 18:10:18.103116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.103485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.103514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.967 qpair failed and we were unable to recover it. 00:33:13.967 [2024-07-22 18:10:18.103864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.104203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.104230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.967 qpair failed and we were unable to recover it. 
00:33:13.967 [2024-07-22 18:10:18.104500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.104872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.104899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.967 qpair failed and we were unable to recover it. 00:33:13.967 [2024-07-22 18:10:18.105261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.105518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.105546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.967 qpair failed and we were unable to recover it. 00:33:13.967 [2024-07-22 18:10:18.105899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.106292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.967 [2024-07-22 18:10:18.106320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.967 qpair failed and we were unable to recover it. 00:33:13.967 [2024-07-22 18:10:18.106700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.107025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.107052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.968 qpair failed and we were unable to recover it. 00:33:13.968 [2024-07-22 18:10:18.107379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.107784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.107811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.968 qpair failed and we were unable to recover it. 00:33:13.968 [2024-07-22 18:10:18.108183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.108409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.108438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.968 qpair failed and we were unable to recover it. 00:33:13.968 [2024-07-22 18:10:18.108830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.109093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.109119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.968 qpair failed and we were unable to recover it. 
00:33:13.968 [2024-07-22 18:10:18.109482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.109834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.109861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.968 qpair failed and we were unable to recover it. 00:33:13.968 [2024-07-22 18:10:18.110238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.110593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.110628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.968 qpair failed and we were unable to recover it. 00:33:13.968 [2024-07-22 18:10:18.110988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.111319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.111347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.968 qpair failed and we were unable to recover it. 00:33:13.968 [2024-07-22 18:10:18.111647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.111935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.111962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.968 qpair failed and we were unable to recover it. 00:33:13.968 [2024-07-22 18:10:18.112155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.112479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.112508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.968 qpair failed and we were unable to recover it. 00:33:13.968 [2024-07-22 18:10:18.112892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.113247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.113277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.968 qpair failed and we were unable to recover it. 00:33:13.968 [2024-07-22 18:10:18.113647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.114009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.114038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.968 qpair failed and we were unable to recover it. 
00:33:13.968 [2024-07-22 18:10:18.114398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.114755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.114786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.968 qpair failed and we were unable to recover it. 00:33:13.968 [2024-07-22 18:10:18.115152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.115372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.115400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.968 qpair failed and we were unable to recover it. 00:33:13.968 [2024-07-22 18:10:18.115711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.116072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.116099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.968 qpair failed and we were unable to recover it. 00:33:13.968 [2024-07-22 18:10:18.116467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.116710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.116737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.968 qpair failed and we were unable to recover it. 00:33:13.968 [2024-07-22 18:10:18.117143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.117484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.117514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.968 qpair failed and we were unable to recover it. 00:33:13.968 [2024-07-22 18:10:18.117911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.118280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.118307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.968 qpair failed and we were unable to recover it. 00:33:13.968 [2024-07-22 18:10:18.118574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.118940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.118967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.968 qpair failed and we were unable to recover it. 
00:33:13.968 [2024-07-22 18:10:18.119319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.119596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.119623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.968 qpair failed and we were unable to recover it. 00:33:13.968 [2024-07-22 18:10:18.119850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.120196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.120223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.968 qpair failed and we were unable to recover it. 00:33:13.968 [2024-07-22 18:10:18.120622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.120832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.120859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.968 qpair failed and we were unable to recover it. 00:33:13.968 [2024-07-22 18:10:18.121236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.121580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.121608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.968 qpair failed and we were unable to recover it. 00:33:13.968 [2024-07-22 18:10:18.121953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.122304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.122333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.968 qpair failed and we were unable to recover it. 00:33:13.968 [2024-07-22 18:10:18.122717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.123148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.123174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.968 qpair failed and we were unable to recover it. 00:33:13.968 [2024-07-22 18:10:18.123545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.123875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.123911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.968 qpair failed and we were unable to recover it. 
00:33:13.968 [2024-07-22 18:10:18.124272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.124505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.124537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.968 qpair failed and we were unable to recover it. 00:33:13.968 [2024-07-22 18:10:18.124892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.125133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.125160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.968 qpair failed and we were unable to recover it. 00:33:13.968 [2024-07-22 18:10:18.125434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.125669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.125695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.968 qpair failed and we were unable to recover it. 00:33:13.968 [2024-07-22 18:10:18.126058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.126401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.968 [2024-07-22 18:10:18.126431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.968 qpair failed and we were unable to recover it. 00:33:13.969 [2024-07-22 18:10:18.126810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.127158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.127185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.969 qpair failed and we were unable to recover it. 00:33:13.969 [2024-07-22 18:10:18.127446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.127799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.127826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.969 qpair failed and we were unable to recover it. 00:33:13.969 [2024-07-22 18:10:18.128089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.128448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.128477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.969 qpair failed and we were unable to recover it. 
00:33:13.969 [2024-07-22 18:10:18.128840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.129069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.129096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.969 qpair failed and we were unable to recover it. 00:33:13.969 [2024-07-22 18:10:18.129442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.129814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.129843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.969 qpair failed and we were unable to recover it. 00:33:13.969 [2024-07-22 18:10:18.130104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.130477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.130506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.969 qpair failed and we were unable to recover it. 00:33:13.969 [2024-07-22 18:10:18.130830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.131179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.131208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.969 qpair failed and we were unable to recover it. 00:33:13.969 [2024-07-22 18:10:18.131575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.131924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.131955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.969 qpair failed and we were unable to recover it. 00:33:13.969 [2024-07-22 18:10:18.132177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.132490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.132519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.969 qpair failed and we were unable to recover it. 00:33:13.969 [2024-07-22 18:10:18.132969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.133204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.133232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.969 qpair failed and we were unable to recover it. 
00:33:13.969 [2024-07-22 18:10:18.133585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.133811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.133838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.969 qpair failed and we were unable to recover it. 00:33:13.969 [2024-07-22 18:10:18.134085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.134360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.134390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.969 qpair failed and we were unable to recover it. 00:33:13.969 [2024-07-22 18:10:18.134634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.134979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.135009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.969 qpair failed and we were unable to recover it. 00:33:13.969 [2024-07-22 18:10:18.135374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.135722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.135750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.969 qpair failed and we were unable to recover it. 00:33:13.969 [2024-07-22 18:10:18.136121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.136465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.136494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.969 qpair failed and we were unable to recover it. 00:33:13.969 [2024-07-22 18:10:18.136860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.137116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.137142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.969 qpair failed and we were unable to recover it. 00:33:13.969 [2024-07-22 18:10:18.137474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.137817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.137845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.969 qpair failed and we were unable to recover it. 
00:33:13.969 [2024-07-22 18:10:18.138209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.138589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.138618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.969 qpair failed and we were unable to recover it. 00:33:13.969 [2024-07-22 18:10:18.138965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.139301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.139330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.969 qpair failed and we were unable to recover it. 00:33:13.969 [2024-07-22 18:10:18.139520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.139771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.139798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.969 qpair failed and we were unable to recover it. 00:33:13.969 [2024-07-22 18:10:18.140180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.140524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.140556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.969 qpair failed and we were unable to recover it. 00:33:13.969 [2024-07-22 18:10:18.140930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.141289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.141317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.969 qpair failed and we were unable to recover it. 00:33:13.969 [2024-07-22 18:10:18.141692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.142034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.142062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.969 qpair failed and we were unable to recover it. 00:33:13.969 [2024-07-22 18:10:18.142402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.142782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.142810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.969 qpair failed and we were unable to recover it. 
00:33:13.969 [2024-07-22 18:10:18.143157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.143499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.969 [2024-07-22 18:10:18.143528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.969 qpair failed and we were unable to recover it. 00:33:13.970 [2024-07-22 18:10:18.143858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.144092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.144123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.970 qpair failed and we were unable to recover it. 00:33:13.970 [2024-07-22 18:10:18.144491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.144672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.144700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.970 qpair failed and we were unable to recover it. 00:33:13.970 [2024-07-22 18:10:18.145058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.145396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.145432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.970 qpair failed and we were unable to recover it. 00:33:13.970 [2024-07-22 18:10:18.145859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.146209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.146237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.970 qpair failed and we were unable to recover it. 00:33:13.970 [2024-07-22 18:10:18.146393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.146775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.146803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.970 qpair failed and we were unable to recover it. 00:33:13.970 [2024-07-22 18:10:18.147104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.147304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.147332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.970 qpair failed and we were unable to recover it. 
00:33:13.970 [2024-07-22 18:10:18.147567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.147890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.147917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.970 qpair failed and we were unable to recover it. 00:33:13.970 [2024-07-22 18:10:18.148243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.148626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.148655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.970 qpair failed and we were unable to recover it. 00:33:13.970 [2024-07-22 18:10:18.148959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.149305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.149334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.970 qpair failed and we were unable to recover it. 00:33:13.970 [2024-07-22 18:10:18.149617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.149839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.149869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.970 qpair failed and we were unable to recover it. 00:33:13.970 [2024-07-22 18:10:18.150235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.150585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.150613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.970 qpair failed and we were unable to recover it. 00:33:13.970 [2024-07-22 18:10:18.150975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.151198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.151228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.970 qpair failed and we were unable to recover it. 00:33:13.970 [2024-07-22 18:10:18.151631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.151847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.151875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.970 qpair failed and we were unable to recover it. 
00:33:13.970 [2024-07-22 18:10:18.152199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.152321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.152347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.970 qpair failed and we were unable to recover it. 00:33:13.970 [2024-07-22 18:10:18.152481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.152752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.152781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.970 qpair failed and we were unable to recover it. 00:33:13.970 [2024-07-22 18:10:18.153182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.153523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.153553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.970 qpair failed and we were unable to recover it. 00:33:13.970 [2024-07-22 18:10:18.153908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.154134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.154160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.970 qpair failed and we were unable to recover it. 00:33:13.970 [2024-07-22 18:10:18.154558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.154900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.154926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.970 qpair failed and we were unable to recover it. 00:33:13.970 [2024-07-22 18:10:18.155303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.155649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.155678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.970 qpair failed and we were unable to recover it. 00:33:13.970 [2024-07-22 18:10:18.156032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.156391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.156420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.970 qpair failed and we were unable to recover it. 
00:33:13.970 [2024-07-22 18:10:18.156829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.157037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.157064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.970 qpair failed and we were unable to recover it. 00:33:13.970 [2024-07-22 18:10:18.157250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.157498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.157527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.970 qpair failed and we were unable to recover it. 00:33:13.970 [2024-07-22 18:10:18.157757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.158108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.158135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.970 qpair failed and we were unable to recover it. 00:33:13.970 [2024-07-22 18:10:18.158380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.158718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.158744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.970 qpair failed and we were unable to recover it. 00:33:13.970 [2024-07-22 18:10:18.159065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.159290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.159316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.970 qpair failed and we were unable to recover it. 00:33:13.970 [2024-07-22 18:10:18.159659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.160007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.160035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.970 qpair failed and we were unable to recover it. 00:33:13.970 [2024-07-22 18:10:18.160294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.160694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.160723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.970 qpair failed and we were unable to recover it. 
00:33:13.970 [2024-07-22 18:10:18.161075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.161419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.970 [2024-07-22 18:10:18.161448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.970 qpair failed and we were unable to recover it. 00:33:13.971 [2024-07-22 18:10:18.161804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.162142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.162172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.971 qpair failed and we were unable to recover it. 00:33:13.971 [2024-07-22 18:10:18.162502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.162859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.162886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.971 qpair failed and we were unable to recover it. 00:33:13.971 [2024-07-22 18:10:18.163047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.163326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.163363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.971 qpair failed and we were unable to recover it. 00:33:13.971 [2024-07-22 18:10:18.163627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.163857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.163887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.971 qpair failed and we were unable to recover it. 00:33:13.971 [2024-07-22 18:10:18.164249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.164515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.164543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.971 qpair failed and we were unable to recover it. 00:33:13.971 [2024-07-22 18:10:18.164787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.165142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.165170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.971 qpair failed and we were unable to recover it. 
00:33:13.971 [2024-07-22 18:10:18.165395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.165762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.165788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.971 qpair failed and we were unable to recover it. 00:33:13.971 [2024-07-22 18:10:18.166189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.166522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.166550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.971 qpair failed and we were unable to recover it. 00:33:13.971 [2024-07-22 18:10:18.166738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.167105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.167133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.971 qpair failed and we were unable to recover it. 00:33:13.971 [2024-07-22 18:10:18.167378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.167605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.167632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.971 qpair failed and we were unable to recover it. 00:33:13.971 [2024-07-22 18:10:18.167978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.168176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.168203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.971 qpair failed and we were unable to recover it. 00:33:13.971 [2024-07-22 18:10:18.168573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.168976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.169003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.971 qpair failed and we were unable to recover it. 00:33:13.971 [2024-07-22 18:10:18.169245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.169577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.169606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.971 qpair failed and we were unable to recover it. 
00:33:13.971 [2024-07-22 18:10:18.169954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.170149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.170176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.971 qpair failed and we were unable to recover it. 00:33:13.971 [2024-07-22 18:10:18.170505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.170821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.170848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.971 qpair failed and we were unable to recover it. 00:33:13.971 [2024-07-22 18:10:18.171213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.171544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.171573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.971 qpair failed and we were unable to recover it. 00:33:13.971 [2024-07-22 18:10:18.171938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.172145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.172173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.971 qpair failed and we were unable to recover it. 00:33:13.971 [2024-07-22 18:10:18.172524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.172915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.172941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.971 qpair failed and we were unable to recover it. 00:33:13.971 [2024-07-22 18:10:18.173153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.173482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.173510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.971 qpair failed and we were unable to recover it. 00:33:13.971 [2024-07-22 18:10:18.173886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.174254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.174281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.971 qpair failed and we were unable to recover it. 
00:33:13.971 [2024-07-22 18:10:18.174550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.174855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.174883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.971 qpair failed and we were unable to recover it. 00:33:13.971 [2024-07-22 18:10:18.175115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.175419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.175447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.971 qpair failed and we were unable to recover it. 00:33:13.971 [2024-07-22 18:10:18.175657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.175843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.175869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.971 qpair failed and we were unable to recover it. 00:33:13.971 [2024-07-22 18:10:18.176298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.176632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.176660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.971 qpair failed and we were unable to recover it. 00:33:13.971 [2024-07-22 18:10:18.176855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.177106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.971 [2024-07-22 18:10:18.177134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.971 qpair failed and we were unable to recover it. 00:33:13.972 [2024-07-22 18:10:18.177502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.177885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.177919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.972 qpair failed and we were unable to recover it. 00:33:13.972 [2024-07-22 18:10:18.178291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.178386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.178412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.972 qpair failed and we were unable to recover it. 
00:33:13.972 [2024-07-22 18:10:18.178813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.179034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.179060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.972 qpair failed and we were unable to recover it. 00:33:13.972 [2024-07-22 18:10:18.179319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.179696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.179725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.972 qpair failed and we were unable to recover it. 00:33:13.972 [2024-07-22 18:10:18.179845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.180154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.180182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.972 qpair failed and we were unable to recover it. 00:33:13.972 [2024-07-22 18:10:18.180540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.180766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.180794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.972 qpair failed and we were unable to recover it. 00:33:13.972 [2024-07-22 18:10:18.181186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.181530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.181559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.972 qpair failed and we were unable to recover it. 00:33:13.972 [2024-07-22 18:10:18.181803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.182146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.182172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.972 qpair failed and we were unable to recover it. 00:33:13.972 [2024-07-22 18:10:18.182529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.182862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.182890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.972 qpair failed and we were unable to recover it. 
00:33:13.972 [2024-07-22 18:10:18.183249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.183549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.183577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.972 qpair failed and we were unable to recover it. 00:33:13.972 [2024-07-22 18:10:18.183944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.184273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.184299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.972 qpair failed and we were unable to recover it. 00:33:13.972 [2024-07-22 18:10:18.184705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.184950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.184977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.972 qpair failed and we were unable to recover it. 00:33:13.972 [2024-07-22 18:10:18.185209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.185540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.185568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.972 qpair failed and we were unable to recover it. 00:33:13.972 [2024-07-22 18:10:18.185838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.185964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.185995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.972 qpair failed and we were unable to recover it. 00:33:13.972 [2024-07-22 18:10:18.186256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.186474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.186502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.972 qpair failed and we were unable to recover it. 00:33:13.972 [2024-07-22 18:10:18.186866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.187220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.187248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.972 qpair failed and we were unable to recover it. 
00:33:13.972 [2024-07-22 18:10:18.187478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.187696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.187723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.972 qpair failed and we were unable to recover it. 00:33:13.972 [2024-07-22 18:10:18.188139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.188500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.188528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.972 qpair failed and we were unable to recover it. 00:33:13.972 [2024-07-22 18:10:18.188795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.189181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.189209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.972 qpair failed and we were unable to recover it. 00:33:13.972 [2024-07-22 18:10:18.189450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.189676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.189702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.972 qpair failed and we were unable to recover it. 00:33:13.972 [2024-07-22 18:10:18.190046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.190369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.190400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.972 qpair failed and we were unable to recover it. 00:33:13.972 [2024-07-22 18:10:18.190741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.190936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.190963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.972 qpair failed and we were unable to recover it. 00:33:13.972 [2024-07-22 18:10:18.191302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.191607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.191636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.972 qpair failed and we were unable to recover it. 
00:33:13.972 [2024-07-22 18:10:18.191897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.192236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.192263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.972 qpair failed and we were unable to recover it. 00:33:13.972 [2024-07-22 18:10:18.192602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.192972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.193000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.972 qpair failed and we were unable to recover it. 00:33:13.972 [2024-07-22 18:10:18.193392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.193756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.972 [2024-07-22 18:10:18.193784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.973 qpair failed and we were unable to recover it. 00:33:13.973 [2024-07-22 18:10:18.194139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.194367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.194396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.973 qpair failed and we were unable to recover it. 00:33:13.973 [2024-07-22 18:10:18.194645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.194982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.195010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.973 qpair failed and we were unable to recover it. 00:33:13.973 [2024-07-22 18:10:18.195358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.195597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.195624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.973 qpair failed and we were unable to recover it. 00:33:13.973 [2024-07-22 18:10:18.195982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.196324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.196377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.973 qpair failed and we were unable to recover it. 
00:33:13.973 [2024-07-22 18:10:18.196728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.197072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.197099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.973 qpair failed and we were unable to recover it. 00:33:13.973 [2024-07-22 18:10:18.197486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.197829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.197855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.973 qpair failed and we were unable to recover it. 00:33:13.973 [2024-07-22 18:10:18.198196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.198533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.198562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.973 qpair failed and we were unable to recover it. 00:33:13.973 [2024-07-22 18:10:18.198929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.199157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.199185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.973 qpair failed and we were unable to recover it. 00:33:13.973 [2024-07-22 18:10:18.199523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.199726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.199752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.973 qpair failed and we were unable to recover it. 00:33:13.973 [2024-07-22 18:10:18.200148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.200449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.200476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.973 qpair failed and we were unable to recover it. 00:33:13.973 [2024-07-22 18:10:18.200848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.201051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.201078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.973 qpair failed and we were unable to recover it. 
00:33:13.973 [2024-07-22 18:10:18.201217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.201572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.201600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.973 qpair failed and we were unable to recover it. 00:33:13.973 [2024-07-22 18:10:18.201959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.202283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.202318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.973 qpair failed and we were unable to recover it. 00:33:13.973 [2024-07-22 18:10:18.202717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.203054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.203080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.973 qpair failed and we were unable to recover it. 00:33:13.973 [2024-07-22 18:10:18.203427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.203637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.203663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.973 qpair failed and we were unable to recover it. 00:33:13.973 [2024-07-22 18:10:18.203904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.204152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.204179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.973 qpair failed and we were unable to recover it. 00:33:13.973 [2024-07-22 18:10:18.204553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.204904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.204931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.973 qpair failed and we were unable to recover it. 00:33:13.973 [2024-07-22 18:10:18.205152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.205461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.205488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.973 qpair failed and we were unable to recover it. 
00:33:13.973 [2024-07-22 18:10:18.205833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.206045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.206070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.973 qpair failed and we were unable to recover it. 00:33:13.973 [2024-07-22 18:10:18.206315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.206720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.206748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.973 qpair failed and we were unable to recover it. 00:33:13.973 [2024-07-22 18:10:18.206950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.207322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.207359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.973 qpair failed and we were unable to recover it. 00:33:13.973 [2024-07-22 18:10:18.207718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.208061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.208087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.973 qpair failed and we were unable to recover it. 00:33:13.973 [2024-07-22 18:10:18.208450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.208786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.208813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.973 qpair failed and we were unable to recover it. 00:33:13.973 [2024-07-22 18:10:18.209186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.209518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.209547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.973 qpair failed and we were unable to recover it. 00:33:13.973 [2024-07-22 18:10:18.209907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.210136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.210162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.973 qpair failed and we were unable to recover it. 
00:33:13.973 [2024-07-22 18:10:18.210401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.973 [2024-07-22 18:10:18.210618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.210650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.974 qpair failed and we were unable to recover it. 00:33:13.974 [2024-07-22 18:10:18.211001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.211215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.211241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.974 qpair failed and we were unable to recover it. 00:33:13.974 [2024-07-22 18:10:18.211594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.211929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.211955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.974 qpair failed and we were unable to recover it. 00:33:13.974 [2024-07-22 18:10:18.212364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.212759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.212786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.974 qpair failed and we were unable to recover it. 00:33:13.974 [2024-07-22 18:10:18.213168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.213521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.213548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.974 qpair failed and we were unable to recover it. 00:33:13.974 [2024-07-22 18:10:18.213952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.214145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.214173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.974 qpair failed and we were unable to recover it. 00:33:13.974 [2024-07-22 18:10:18.214393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.214602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.214629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.974 qpair failed and we were unable to recover it. 
00:33:13.974 [2024-07-22 18:10:18.214948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.215296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.215323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.974 qpair failed and we were unable to recover it. 00:33:13.974 [2024-07-22 18:10:18.215538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.215897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.215923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.974 qpair failed and we were unable to recover it. 00:33:13.974 [2024-07-22 18:10:18.216036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.216388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.216417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.974 qpair failed and we were unable to recover it. 00:33:13.974 [2024-07-22 18:10:18.216740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.217067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.217105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.974 qpair failed and we were unable to recover it. 00:33:13.974 [2024-07-22 18:10:18.217503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.218018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.218051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.974 qpair failed and we were unable to recover it. 00:33:13.974 [2024-07-22 18:10:18.218272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.218635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.218664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.974 qpair failed and we were unable to recover it. 00:33:13.974 [2024-07-22 18:10:18.219021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.219390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.219418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.974 qpair failed and we were unable to recover it. 
00:33:13.974 [2024-07-22 18:10:18.219791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.219986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.220013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.974 qpair failed and we were unable to recover it. 00:33:13.974 [2024-07-22 18:10:18.220306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.220664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.220693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.974 qpair failed and we were unable to recover it. 00:33:13.974 [2024-07-22 18:10:18.221081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.221311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.221338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.974 qpair failed and we were unable to recover it. 00:33:13.974 [2024-07-22 18:10:18.221690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.221928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.221954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.974 qpair failed and we were unable to recover it. 00:33:13.974 [2024-07-22 18:10:18.222258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.222597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.222626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.974 qpair failed and we were unable to recover it. 00:33:13.974 [2024-07-22 18:10:18.222980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.223327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.223364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.974 qpair failed and we were unable to recover it. 00:33:13.974 [2024-07-22 18:10:18.223587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.223821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.223847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.974 qpair failed and we were unable to recover it. 
00:33:13.974 [2024-07-22 18:10:18.224148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.224469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.974 [2024-07-22 18:10:18.224498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.974 qpair failed and we were unable to recover it. 00:33:13.974 [2024-07-22 18:10:18.224844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.975 [2024-07-22 18:10:18.225222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.975 [2024-07-22 18:10:18.225249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.975 qpair failed and we were unable to recover it. 00:33:13.975 [2024-07-22 18:10:18.225497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.975 [2024-07-22 18:10:18.225859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.975 [2024-07-22 18:10:18.225885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.975 qpair failed and we were unable to recover it. 00:33:13.975 [2024-07-22 18:10:18.226243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.975 [2024-07-22 18:10:18.226595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.975 [2024-07-22 18:10:18.226623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:13.975 qpair failed and we were unable to recover it. 00:33:13.975 [2024-07-22 18:10:18.226972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.227231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.227257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.244 qpair failed and we were unable to recover it. 00:33:14.244 [2024-07-22 18:10:18.227594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.227826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.227851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.244 qpair failed and we were unable to recover it. 00:33:14.244 [2024-07-22 18:10:18.228124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.228468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.228495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.244 qpair failed and we were unable to recover it. 
00:33:14.244 [2024-07-22 18:10:18.228870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.229274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.229299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.244 qpair failed and we were unable to recover it. 00:33:14.244 [2024-07-22 18:10:18.229676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.230043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.230068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.244 qpair failed and we were unable to recover it. 00:33:14.244 [2024-07-22 18:10:18.230446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.230832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.230858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.244 qpair failed and we were unable to recover it. 00:33:14.244 [2024-07-22 18:10:18.231259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.231624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.231651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.244 qpair failed and we were unable to recover it. 00:33:14.244 [2024-07-22 18:10:18.231893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.232096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.232123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.244 qpair failed and we were unable to recover it. 00:33:14.244 [2024-07-22 18:10:18.232515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.232770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.232797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.244 qpair failed and we were unable to recover it. 00:33:14.244 [2024-07-22 18:10:18.232918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.233233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.233261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.244 qpair failed and we were unable to recover it. 
00:33:14.244 [2024-07-22 18:10:18.233631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.233971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.233998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.244 qpair failed and we were unable to recover it. 00:33:14.244 [2024-07-22 18:10:18.234243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.234590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.234620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.244 qpair failed and we were unable to recover it. 00:33:14.244 [2024-07-22 18:10:18.234983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.235326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.235364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.244 qpair failed and we were unable to recover it. 00:33:14.244 [2024-07-22 18:10:18.235728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.236089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.236117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.244 qpair failed and we were unable to recover it. 00:33:14.244 [2024-07-22 18:10:18.236377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.236750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.236778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.244 qpair failed and we were unable to recover it. 00:33:14.244 [2024-07-22 18:10:18.237141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.237499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.237528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.244 qpair failed and we were unable to recover it. 00:33:14.244 [2024-07-22 18:10:18.237783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.238149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.238177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.244 qpair failed and we were unable to recover it. 
00:33:14.244 [2024-07-22 18:10:18.238553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.238905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.244 [2024-07-22 18:10:18.238932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.244 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.239305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.239631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.239661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.240019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.240385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.240413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.240651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.240857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.240884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.241119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.241566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.241594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.241985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.242372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.242401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.242764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.243109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.243136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 
00:33:14.245 [2024-07-22 18:10:18.243468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.243832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.243858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.244131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.244530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.244558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.244918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.245156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.245183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.245410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.245615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.245641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.246028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.246362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.246391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.246723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.246961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.246990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.247344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.247745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.247772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 
00:33:14.245 [2024-07-22 18:10:18.248125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.248464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.248493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.248818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.249166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.249193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.249559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.249772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.249798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.250198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.250505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.250533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.250744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.251078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.251106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.251341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.251655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.251688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.252043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.252149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.252174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 
00:33:14.245 [2024-07-22 18:10:18.252501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.252848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.252876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.253001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.253317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.253344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.253714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.253930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.253956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.254341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.254694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.254721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.254930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.255145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.255171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.255528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.255875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.255901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.256262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.256627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.256656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 
00:33:14.245 [2024-07-22 18:10:18.257032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.257264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.257290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.257654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.257978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.258005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.258273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.258379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.258409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.258667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.258949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.258977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.259310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.259705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.259733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.260001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.260262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.260289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.260615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.260966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.260993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 
00:33:14.245 [2024-07-22 18:10:18.261207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.261562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.261590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.261941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.262038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.262064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.262328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.262559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.262587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.262936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.263310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.263337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.263577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.263929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.263955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.264174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.264529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.264558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.264882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.265198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.265225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 
00:33:14.245 [2024-07-22 18:10:18.265482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.265707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.265733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.266131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.266480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.266514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.266888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.267218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.267244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.267573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.267913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.267940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.245 qpair failed and we were unable to recover it. 00:33:14.245 [2024-07-22 18:10:18.268173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.245 [2024-07-22 18:10:18.268418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.268447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.268676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.268955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.268981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.269215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.269457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.269486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 
00:33:14.246 [2024-07-22 18:10:18.269697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.269927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.269954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.270313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.270518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.270546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.270896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.271101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.271128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.271371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.271615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.271645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.271999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.272386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.272416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.272671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.272907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.272933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.273272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.273473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.273500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 
00:33:14.246 [2024-07-22 18:10:18.273852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.274149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.274175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.274545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.274861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.274887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.275259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.275610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.275639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.276045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.276272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.276298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.276456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.276802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.276830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.277074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.277304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.277330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.277667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.277860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.277886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 
00:33:14.246 [2024-07-22 18:10:18.278127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.278385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.278415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.278772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.278974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.279000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.279370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.279722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.279749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.280099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.280331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.280384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.280725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.280927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.280953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.281363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.281616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.281643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.281993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.282235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.282262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 
00:33:14.246 [2024-07-22 18:10:18.282497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.282823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.282856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.283211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.283619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.283648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.284003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.284340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.284377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.284624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.284872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.284898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.285257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.285683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.285711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.285919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.286304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.286331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.286548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.286892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.286920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 
00:33:14.246 [2024-07-22 18:10:18.287140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.287488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.287516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.287738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.287991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.288021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.288384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.288728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.288756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.289061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.289233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.289259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.289613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.289821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.289847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.290215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.290585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.290614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.290853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.291204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.291230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 
00:33:14.246 [2024-07-22 18:10:18.291579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.291785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.291812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.292069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.292420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.292448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.292811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.293188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.293215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.293419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.293805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.293832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.294009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.294265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.294293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.294660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.294993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.295021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.295387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.295653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.295680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 
00:33:14.246 [2024-07-22 18:10:18.295920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.296283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.296310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.246 qpair failed and we were unable to recover it. 00:33:14.246 [2024-07-22 18:10:18.296684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.246 [2024-07-22 18:10:18.297055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.297081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.297439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.297548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.297578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.297912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.298227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.298254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.298513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.298821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.298847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.299038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.299245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.299271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.299660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.300053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.300079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 
00:33:14.247 [2024-07-22 18:10:18.300290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.300602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.300629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.300717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.300947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.300973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.301338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.301698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.301725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.302142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.302376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.302404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.302622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.302857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.302884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.303246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.303636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.303665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.304019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.304373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.304404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 
00:33:14.247 [2024-07-22 18:10:18.304786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.305002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.305028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.305407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.305739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.305767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.306139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.306469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.306497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.306863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.307205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.307232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.307579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.307817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.307844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.308186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.308531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.308560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.308909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.309274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.309301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 
00:33:14.247 [2024-07-22 18:10:18.309702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.310075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.310102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.310458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.310711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.310737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.310973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.311322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.311359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.311711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.312035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.312063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.312450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.312830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.312857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.313199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.313553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.313580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.313934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.314135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.314161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 
00:33:14.247 [2024-07-22 18:10:18.314496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.314846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.314873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.315215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.315438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.315465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.315828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.316163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.316200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.316558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.316929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.316956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.317164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.317508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.317536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.317902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.318123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.318149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.318392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.318709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.318736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 
00:33:14.247 [2024-07-22 18:10:18.319106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.319471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.319501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.319870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.320223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.320250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.320457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.320597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.320623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.320958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.321323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.321378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.321751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.322075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.322109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.322514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.322751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.322784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.247 qpair failed and we were unable to recover it. 00:33:14.247 [2024-07-22 18:10:18.323040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.323255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.247 [2024-07-22 18:10:18.323282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 
00:33:14.248 [2024-07-22 18:10:18.323504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.323881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.323908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.324172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.324551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.324579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.324981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.325322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.325358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.325771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.326091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.326119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.326486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.326835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.326862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.327223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.327572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.327601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.327999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.328230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.328260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 
00:33:14.248 [2024-07-22 18:10:18.328521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.328880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.328907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.329126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.329495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.329523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.329835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.330183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.330209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.330560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.330909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.330936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.331346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.331708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.331735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.332103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.332450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.332479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.332689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.333032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.333059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 
00:33:14.248 [2024-07-22 18:10:18.333275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.333641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.333669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.334018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.334400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.334428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.334880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.335234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.335260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.335687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.335923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.335950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.336319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.336688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.336716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.336979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.337191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.337219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.337456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.337820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.337848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 
00:33:14.248 [2024-07-22 18:10:18.338095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.338316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.338342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.338608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.338940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.338967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.339359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.339731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.339758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.340051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.340422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.340451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.340823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.341075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.341100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.341300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.341664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.341693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.342051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.342404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.342431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 
00:33:14.248 [2024-07-22 18:10:18.342673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.343017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.343043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.343407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.343807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.343836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.344156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.344394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.344423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.344780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.345133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.345161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.345518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.345736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.345763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.346153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.346532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.346561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.346923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.347267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.347294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 
00:33:14.248 [2024-07-22 18:10:18.347731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.348082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.348108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.348287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.348564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.348593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.348957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.349336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.349375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.349618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.349983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.350010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.350370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.350593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.350620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.350985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.351319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.351346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.351560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.351903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.351931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 
00:33:14.248 [2024-07-22 18:10:18.352284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.352516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.352545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.352763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.353128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.353155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.353344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.353666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.353692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.248 qpair failed and we were unable to recover it. 00:33:14.248 [2024-07-22 18:10:18.354037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.354395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.248 [2024-07-22 18:10:18.354424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.354767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.355109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.355135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.355539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.355899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.355927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.356166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.356400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.356428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 
00:33:14.249 [2024-07-22 18:10:18.356765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.357106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.357139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.357483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.357831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.357858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.358107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.358446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.358474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.358689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.358917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.358943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.359160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.359535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.359562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.359965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.360207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.360234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.360563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.360909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.360935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 
00:33:14.249 [2024-07-22 18:10:18.361137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.361477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.361505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.361770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.362118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.362144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.362405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.362629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.362660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.362936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.363260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.363286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.363516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.363878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.363905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.364269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.364620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.364650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.364983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.365185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.365211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 
00:33:14.249 [2024-07-22 18:10:18.365478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.365876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.365904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.366125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.366468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.366496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.366738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.367095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.367122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.367425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.367663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.367693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.368058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.368396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.368426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.368800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.369013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.369039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.369408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.369769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.369797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 
00:33:14.249 [2024-07-22 18:10:18.370038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.370220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.370246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.370488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.370712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.370739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.370989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.371306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.371333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.371720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.371957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.371986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.372234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.372644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.372673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.373017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.373238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.373265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.373440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.373831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.373856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 
00:33:14.249 [2024-07-22 18:10:18.374081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.374432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.374460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.374691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.374910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.374937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.375291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.375636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.375664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.376018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.376380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.376412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.376816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.377169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.377197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.377528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.377937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.377964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.378340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.378706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.378733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 
00:33:14.249 [2024-07-22 18:10:18.379126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.379455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.379483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.379699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.380046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.380073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.380285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.380626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.380656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.381016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.381372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.381402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.381773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.381961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.381987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.382342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.382715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.382743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 00:33:14.249 [2024-07-22 18:10:18.383052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.383405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.383433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.249 qpair failed and we were unable to recover it. 
00:33:14.249 [2024-07-22 18:10:18.383651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.249 [2024-07-22 18:10:18.383825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.383851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.384240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.384469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.384497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.384864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.385209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.385236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.385459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.385674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.385700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.386085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.386309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.386338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.386709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.386933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.386962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.387160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.387399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.387428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 
00:33:14.250 [2024-07-22 18:10:18.387812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.388252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.388279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.388534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.388942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.388969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.389335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.389688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.389722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.390077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.390419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.390447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.390717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.390937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.390964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.391357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.391603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.391629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.391807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.392162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.392189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 
00:33:14.250 [2024-07-22 18:10:18.392405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.392786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.392812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.393128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.393373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.393404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.393769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.394131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.394158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.394486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.394702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.394728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.394972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.395304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.395330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.395572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.395932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.395959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.396334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.396727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.396755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 
00:33:14.250 [2024-07-22 18:10:18.397116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.397444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.397472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.397658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.397990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.398016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.398380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.398786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.398813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.399040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.399401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.399428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.399663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.400002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.400028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.400245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.400617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.400645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.400893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.401235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.401261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 
00:33:14.250 [2024-07-22 18:10:18.401472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.401839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.401865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.402215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.402566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.402594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.402941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.403139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.403165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.403510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.403848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.403875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.404041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.404407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.404435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.404772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.405148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.405174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.405384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.405638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.405665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 
00:33:14.250 [2024-07-22 18:10:18.406031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.406376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.406405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.406751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.407113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.407140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.407519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.407896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.407923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.408300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.408666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.408695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.408926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.409275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.409302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.409711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.410063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.250 [2024-07-22 18:10:18.410090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.250 qpair failed and we were unable to recover it. 00:33:14.250 [2024-07-22 18:10:18.410508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.410847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.410874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 
00:33:14.251 [2024-07-22 18:10:18.411121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.411363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.411395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.411808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.412040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.412066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.412296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.412662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.412692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.413048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.413254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.413280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.413556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.413916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.413944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.414186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.414392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.414420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.414801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.415017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.415044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 
00:33:14.251 [2024-07-22 18:10:18.415286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.415611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.415641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.416001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.416371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.416400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.416755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.417104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.417131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.417343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.417706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.417733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.418077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.418314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.418341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.418689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.419110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.419136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.419468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.419743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.419769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 
00:33:14.251 [2024-07-22 18:10:18.420031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.420390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.420419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.420616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.420707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.420732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.420986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.421329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.421382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.421734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.421971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.421997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.422334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.422748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.422782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.423175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.423530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.423559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.423927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.424032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.424057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 
00:33:14.251 [2024-07-22 18:10:18.424445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.424801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.424828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.425152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.425472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.425500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.425856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.426158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.426184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.426577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.426920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.426947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.427311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.427698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.427725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.428107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.428460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.428489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.428832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.429181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.429207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 
00:33:14.251 [2024-07-22 18:10:18.429372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.429720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.429753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.430070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.430441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.430469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.430687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.431050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.431077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.431429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.431800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.431827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.432149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.432481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.432509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.432854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.433109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.433136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.433490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.433838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.433866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 
00:33:14.251 [2024-07-22 18:10:18.434223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.434567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.434595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.434981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.435283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.435310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.435709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.436066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.436094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.436415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.436760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.436788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.437139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.437474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.437503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.437708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.437932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.437961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.438334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.438698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.438727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 
00:33:14.251 [2024-07-22 18:10:18.439051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.439381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.439410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.439668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.440023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.440050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.440266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.440639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.440669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.251 qpair failed and we were unable to recover it. 00:33:14.251 [2024-07-22 18:10:18.440921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.251 [2024-07-22 18:10:18.441155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.441182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.441311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.441679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.441707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.441946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.442188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.442215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.442457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.442814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.442841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 
00:33:14.252 [2024-07-22 18:10:18.443027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.443261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.443289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.443675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.443890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.443917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.444252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.444331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.444370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.444810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.445138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.445186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.445534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.445878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.445904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.446245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.446499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.446527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.446783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.447012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.447039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 
00:33:14.252 [2024-07-22 18:10:18.447396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.447757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.447785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.448014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.448226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.448252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.448538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.448860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.448894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.449251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.449503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.449532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.449918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.450258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.450285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.450645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.450991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.451018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.451254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.451584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.451611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 
00:33:14.252 [2024-07-22 18:10:18.451829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.452187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.452214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.452570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.452788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.452815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.453177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.453520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.453550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.453794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.454156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.454183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.454532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.454754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.454781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.455010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.455367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.455395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.455738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.456078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.456110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 
00:33:14.252 [2024-07-22 18:10:18.456445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.456663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.456690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.456896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.457134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.457162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.457520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.457889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.457916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.458256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.458626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.458654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.458958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.459191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.459220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.459321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.459706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.459735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.460101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.460443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.460473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 
00:33:14.252 [2024-07-22 18:10:18.460844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.461195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.461223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.461582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.461934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.461962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.462323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.462433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.462467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.462712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.462877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.462905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.463128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.463370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.463400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.463771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.464149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.464176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.464543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.464953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.464980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 
00:33:14.252 [2024-07-22 18:10:18.465224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.465571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.465600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.465936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.466207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.466234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.466575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.466896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.466922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.467267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.467512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.467540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.467781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.468119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.468147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.468498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.468722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.468750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.252 qpair failed and we were unable to recover it. 00:33:14.252 [2024-07-22 18:10:18.469086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.469281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.252 [2024-07-22 18:10:18.469309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 
00:33:14.253 [2024-07-22 18:10:18.469523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.469850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.469877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.470224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.470454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.470483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.470705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.471035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.471062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.471417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.471673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.471700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.472048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.472386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.472416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.472777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.473152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.473180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.473542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.473921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.473950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 
00:33:14.253 [2024-07-22 18:10:18.474307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.474668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.474697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.475062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.475262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.475289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.475666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.475900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.475930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.476245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.476478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.476507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.476884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.477058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.477084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.477445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.477809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.477836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.478192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.478367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.478395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 
00:33:14.253 [2024-07-22 18:10:18.478645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.478858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.478884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.479140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.479478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.479506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.479863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.480212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.480239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.480583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.480851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.480878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.481089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.481299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.481325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.481702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.482055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.482082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.482370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.482711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.482739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 
00:33:14.253 [2024-07-22 18:10:18.483082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.483443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.483472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.483819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.484002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.484028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.484394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.484749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.484776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.484982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.485381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.485410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.485679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.486032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.486059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.486417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.486752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.486779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.487118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.487321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.487356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 
00:33:14.253 [2024-07-22 18:10:18.487687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.487921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.487948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.488310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.488560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.488589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.488956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.489192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.489218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.489577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.489923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.489950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.490283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.490595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.490623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.490971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.491316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.491343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.491700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.492071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.492098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 
00:33:14.253 [2024-07-22 18:10:18.492491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.492813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.492840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.493167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.493515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.493543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.493920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.494277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.494304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.494587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.494927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.494954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.495297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.495511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.495545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.495751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.496129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.496156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.496523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.496782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.496808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 
00:33:14.253 [2024-07-22 18:10:18.496962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.497312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.497339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.253 qpair failed and we were unable to recover it. 00:33:14.253 [2024-07-22 18:10:18.497726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.253 [2024-07-22 18:10:18.498103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.498131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.254 qpair failed and we were unable to recover it. 00:33:14.254 [2024-07-22 18:10:18.498329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.498663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.498690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.254 qpair failed and we were unable to recover it. 00:33:14.254 [2024-07-22 18:10:18.499044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.499247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.499273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.254 qpair failed and we were unable to recover it. 00:33:14.254 [2024-07-22 18:10:18.499507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.499916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.499943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.254 qpair failed and we were unable to recover it. 00:33:14.254 [2024-07-22 18:10:18.500272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.500645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.500674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.254 qpair failed and we were unable to recover it. 00:33:14.254 [2024-07-22 18:10:18.500913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.501150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.501178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.254 qpair failed and we were unable to recover it. 
00:33:14.254 [2024-07-22 18:10:18.501533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.501891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.501918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.254 qpair failed and we were unable to recover it. 00:33:14.254 [2024-07-22 18:10:18.502139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.502582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.502610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.254 qpair failed and we were unable to recover it. 00:33:14.254 [2024-07-22 18:10:18.502859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.503252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.503279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.254 qpair failed and we were unable to recover it. 00:33:14.254 [2024-07-22 18:10:18.503623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.503989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.504018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.254 qpair failed and we were unable to recover it. 00:33:14.254 [2024-07-22 18:10:18.504378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.504765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.504793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.254 qpair failed and we were unable to recover it. 00:33:14.254 [2024-07-22 18:10:18.505040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.505272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.505298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.254 qpair failed and we were unable to recover it. 00:33:14.254 [2024-07-22 18:10:18.505705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.506048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.506075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.254 qpair failed and we were unable to recover it. 
00:33:14.254 [2024-07-22 18:10:18.506448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.506697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.506723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.254 qpair failed and we were unable to recover it. 00:33:14.254 [2024-07-22 18:10:18.507118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.507329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.507368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.254 qpair failed and we were unable to recover it. 00:33:14.254 [2024-07-22 18:10:18.507754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.508124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.508151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.254 qpair failed and we were unable to recover it. 00:33:14.254 [2024-07-22 18:10:18.508526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.508806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.508832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.254 qpair failed and we were unable to recover it. 00:33:14.254 [2024-07-22 18:10:18.509055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.509302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.509330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.254 qpair failed and we were unable to recover it. 00:33:14.254 [2024-07-22 18:10:18.509750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.509976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.510004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.254 qpair failed and we were unable to recover it. 00:33:14.254 [2024-07-22 18:10:18.510208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.510591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.254 [2024-07-22 18:10:18.510621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.254 qpair failed and we were unable to recover it. 
00:33:14.254 [2024-07-22 18:10:18.510974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.523 [2024-07-22 18:10:18.511336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.523 [2024-07-22 18:10:18.511379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.523 qpair failed and we were unable to recover it. 00:33:14.523 [2024-07-22 18:10:18.511752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.523 [2024-07-22 18:10:18.512140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.523 [2024-07-22 18:10:18.512168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.523 qpair failed and we were unable to recover it. 00:33:14.523 [2024-07-22 18:10:18.512468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.523 [2024-07-22 18:10:18.512793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.523 [2024-07-22 18:10:18.512820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.523 qpair failed and we were unable to recover it. 00:33:14.523 [2024-07-22 18:10:18.513212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.523 [2024-07-22 18:10:18.513479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.523 [2024-07-22 18:10:18.513508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.523 qpair failed and we were unable to recover it. 00:33:14.523 [2024-07-22 18:10:18.513876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.523 [2024-07-22 18:10:18.513999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.523 [2024-07-22 18:10:18.514025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.523 qpair failed and we were unable to recover it. 00:33:14.523 [2024-07-22 18:10:18.514383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.523 [2024-07-22 18:10:18.514714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.523 [2024-07-22 18:10:18.514743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.523 qpair failed and we were unable to recover it. 00:33:14.523 [2024-07-22 18:10:18.515070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.523 [2024-07-22 18:10:18.515314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.523 [2024-07-22 18:10:18.515341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.523 qpair failed and we were unable to recover it. 
00:33:14.523 [2024-07-22 18:10:18.515609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.523 [2024-07-22 18:10:18.515851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.523 [2024-07-22 18:10:18.515877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.523 qpair failed and we were unable to recover it. 00:33:14.523 [2024-07-22 18:10:18.516129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.523 [2024-07-22 18:10:18.516372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.523 [2024-07-22 18:10:18.516403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.523 qpair failed and we were unable to recover it. 00:33:14.523 [2024-07-22 18:10:18.516728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.523 [2024-07-22 18:10:18.517056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.523 [2024-07-22 18:10:18.517084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.523 qpair failed and we were unable to recover it. 00:33:14.523 [2024-07-22 18:10:18.517437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.523 [2024-07-22 18:10:18.517657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.523 [2024-07-22 18:10:18.517685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.523 qpair failed and we were unable to recover it. 00:33:14.523 [2024-07-22 18:10:18.518132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.518448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.518477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.524 qpair failed and we were unable to recover it. 00:33:14.524 [2024-07-22 18:10:18.518894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.519134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.519162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.524 qpair failed and we were unable to recover it. 00:33:14.524 [2024-07-22 18:10:18.519531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.519946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.519973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.524 qpair failed and we were unable to recover it. 
00:33:14.524 [2024-07-22 18:10:18.520366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.520564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.520591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.524 qpair failed and we were unable to recover it. 00:33:14.524 [2024-07-22 18:10:18.520948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.521038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.521063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.524 qpair failed and we were unable to recover it. 00:33:14.524 [2024-07-22 18:10:18.521310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.521677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.521707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.524 qpair failed and we were unable to recover it. 00:33:14.524 [2024-07-22 18:10:18.522110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.522459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.522489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.524 qpair failed and we were unable to recover it. 00:33:14.524 [2024-07-22 18:10:18.522901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.523264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.523291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.524 qpair failed and we were unable to recover it. 00:33:14.524 [2024-07-22 18:10:18.523654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.523877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.523906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.524 qpair failed and we were unable to recover it. 00:33:14.524 [2024-07-22 18:10:18.524038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.524346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.524386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.524 qpair failed and we were unable to recover it. 
00:33:14.524 [2024-07-22 18:10:18.524633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.524982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.525009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.524 qpair failed and we were unable to recover it. 00:33:14.524 [2024-07-22 18:10:18.525224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.525559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.525587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.524 qpair failed and we were unable to recover it. 00:33:14.524 [2024-07-22 18:10:18.525952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.526319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.526347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.524 qpair failed and we were unable to recover it. 00:33:14.524 [2024-07-22 18:10:18.526773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.527155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.527182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.524 qpair failed and we were unable to recover it. 00:33:14.524 [2024-07-22 18:10:18.527414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.527651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.527682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.524 qpair failed and we were unable to recover it. 00:33:14.524 [2024-07-22 18:10:18.527926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.528156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.528186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.524 qpair failed and we were unable to recover it. 00:33:14.524 [2024-07-22 18:10:18.528654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.529002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.529035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.524 qpair failed and we were unable to recover it. 
00:33:14.524 [2024-07-22 18:10:18.529428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.529703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.529730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.524 qpair failed and we were unable to recover it. 00:33:14.524 [2024-07-22 18:10:18.529859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.530231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.530258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.524 qpair failed and we were unable to recover it. 00:33:14.524 [2024-07-22 18:10:18.530475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.530669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.530696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.524 qpair failed and we were unable to recover it. 00:33:14.524 [2024-07-22 18:10:18.531008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.531241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.531268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.524 qpair failed and we were unable to recover it. 00:33:14.524 [2024-07-22 18:10:18.531527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.531758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.531785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.524 qpair failed and we were unable to recover it. 00:33:14.524 [2024-07-22 18:10:18.532047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.532398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.532426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.524 qpair failed and we were unable to recover it. 00:33:14.524 [2024-07-22 18:10:18.532856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.533122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.533148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.524 qpair failed and we were unable to recover it. 
00:33:14.524 [2024-07-22 18:10:18.533429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.533644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.533670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.524 qpair failed and we were unable to recover it. 00:33:14.524 [2024-07-22 18:10:18.533839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.534204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.534231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.524 qpair failed and we were unable to recover it. 00:33:14.524 [2024-07-22 18:10:18.534451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.534827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.534855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.524 qpair failed and we were unable to recover it. 00:33:14.524 [2024-07-22 18:10:18.535210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.535443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.535471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.524 qpair failed and we were unable to recover it. 00:33:14.524 [2024-07-22 18:10:18.535693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.535921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.524 [2024-07-22 18:10:18.535948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.525 qpair failed and we were unable to recover it. 00:33:14.525 [2024-07-22 18:10:18.536274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.536499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.536528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.525 qpair failed and we were unable to recover it. 00:33:14.525 [2024-07-22 18:10:18.536628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.536842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.536869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.525 qpair failed and we were unable to recover it. 
00:33:14.525 [2024-07-22 18:10:18.537255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.537509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.537537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.525 qpair failed and we were unable to recover it. 00:33:14.525 [2024-07-22 18:10:18.537894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.538230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.538256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.525 qpair failed and we were unable to recover it. 00:33:14.525 [2024-07-22 18:10:18.538643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.538870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.538897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.525 qpair failed and we were unable to recover it. 00:33:14.525 [2024-07-22 18:10:18.539247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.539667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.539695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.525 qpair failed and we were unable to recover it. 00:33:14.525 [2024-07-22 18:10:18.540064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.540486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.540514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.525 qpair failed and we were unable to recover it. 00:33:14.525 [2024-07-22 18:10:18.540884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.540971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.540996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.525 qpair failed and we were unable to recover it. 00:33:14.525 [2024-07-22 18:10:18.541371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.541586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.541612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.525 qpair failed and we were unable to recover it. 
00:33:14.525 [2024-07-22 18:10:18.541860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.542197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.542224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.525 qpair failed and we were unable to recover it. 00:33:14.525 [2024-07-22 18:10:18.542591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.542833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.542860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.525 qpair failed and we were unable to recover it. 00:33:14.525 [2024-07-22 18:10:18.543206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.543561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.543590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.525 qpair failed and we were unable to recover it. 00:33:14.525 [2024-07-22 18:10:18.543887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.544138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.544164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.525 qpair failed and we were unable to recover it. 00:33:14.525 [2024-07-22 18:10:18.544404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.544749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.544775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.525 qpair failed and we were unable to recover it. 00:33:14.525 [2024-07-22 18:10:18.545019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.545343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.545384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.525 qpair failed and we were unable to recover it. 00:33:14.525 [2024-07-22 18:10:18.545771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.546131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.546160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.525 qpair failed and we were unable to recover it. 
00:33:14.525 [2024-07-22 18:10:18.546486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.546713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.546741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.525 qpair failed and we were unable to recover it. 00:33:14.525 [2024-07-22 18:10:18.547109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.547343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.547385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.525 qpair failed and we were unable to recover it. 00:33:14.525 [2024-07-22 18:10:18.547771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.548106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.548133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.525 qpair failed and we were unable to recover it. 00:33:14.525 [2024-07-22 18:10:18.548544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.548892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.548920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.525 qpair failed and we were unable to recover it. 00:33:14.525 [2024-07-22 18:10:18.549298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.549637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.549664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.525 qpair failed and we were unable to recover it. 00:33:14.525 [2024-07-22 18:10:18.549889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.550101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.550127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.525 qpair failed and we were unable to recover it. 00:33:14.525 [2024-07-22 18:10:18.550433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.550821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.550848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.525 qpair failed and we were unable to recover it. 
00:33:14.525 [2024-07-22 18:10:18.551091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.551440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.551470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.525 qpair failed and we were unable to recover it. 00:33:14.525 [2024-07-22 18:10:18.551847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.552191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.552219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.525 qpair failed and we were unable to recover it. 00:33:14.525 [2024-07-22 18:10:18.552617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.552959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.552986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.525 qpair failed and we were unable to recover it. 00:33:14.525 [2024-07-22 18:10:18.553340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.553727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.553755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.525 qpair failed and we were unable to recover it. 00:33:14.525 [2024-07-22 18:10:18.554111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.554448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.554476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.525 qpair failed and we were unable to recover it. 00:33:14.525 [2024-07-22 18:10:18.554817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.525 [2024-07-22 18:10:18.555159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.555187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.526 qpair failed and we were unable to recover it. 00:33:14.526 [2024-07-22 18:10:18.555393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.555620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.555648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.526 qpair failed and we were unable to recover it. 
00:33:14.526 [2024-07-22 18:10:18.556003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.556308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.556335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.526 qpair failed and we were unable to recover it. 00:33:14.526 [2024-07-22 18:10:18.556556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.556909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.556936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.526 qpair failed and we were unable to recover it. 00:33:14.526 [2024-07-22 18:10:18.557211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.557620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.557649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.526 qpair failed and we were unable to recover it. 00:33:14.526 [2024-07-22 18:10:18.557894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.557996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.558024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.526 qpair failed and we were unable to recover it. 00:33:14.526 [2024-07-22 18:10:18.558317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.558571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.558599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.526 qpair failed and we were unable to recover it. 00:33:14.526 [2024-07-22 18:10:18.558847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.559171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.559197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.526 qpair failed and we were unable to recover it. 00:33:14.526 [2024-07-22 18:10:18.559411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.559822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.559849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.526 qpair failed and we were unable to recover it. 
00:33:14.526 [2024-07-22 18:10:18.560198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.560470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.560498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.526 qpair failed and we were unable to recover it. 00:33:14.526 [2024-07-22 18:10:18.560899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.561223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.561256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.526 qpair failed and we were unable to recover it. 00:33:14.526 [2024-07-22 18:10:18.561612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.561822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.561850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.526 qpair failed and we were unable to recover it. 00:33:14.526 [2024-07-22 18:10:18.562110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.562374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.562402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.526 qpair failed and we were unable to recover it. 00:33:14.526 [2024-07-22 18:10:18.562755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.562977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.563004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.526 qpair failed and we were unable to recover it. 00:33:14.526 [2024-07-22 18:10:18.563381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.563710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.563738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.526 qpair failed and we were unable to recover it. 00:33:14.526 [2024-07-22 18:10:18.564107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.564332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.564369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.526 qpair failed and we were unable to recover it. 
00:33:14.526 [2024-07-22 18:10:18.564629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.565016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.565044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.526 qpair failed and we were unable to recover it. 00:33:14.526 [2024-07-22 18:10:18.565386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.565712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.565740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.526 qpair failed and we were unable to recover it. 00:33:14.526 [2024-07-22 18:10:18.565835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.566161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.566188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.526 qpair failed and we were unable to recover it. 00:33:14.526 [2024-07-22 18:10:18.566422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.566686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.566714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.526 qpair failed and we were unable to recover it. 00:33:14.526 [2024-07-22 18:10:18.566965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.567066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.567102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.526 qpair failed and we were unable to recover it. 00:33:14.526 [2024-07-22 18:10:18.567345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.567733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.567762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.526 qpair failed and we were unable to recover it. 00:33:14.526 [2024-07-22 18:10:18.567967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.568298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.568325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.526 qpair failed and we were unable to recover it. 
00:33:14.526 [2024-07-22 18:10:18.568606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.568960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.568987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.526 qpair failed and we were unable to recover it. 00:33:14.526 [2024-07-22 18:10:18.569366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.569783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.569810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.526 qpair failed and we were unable to recover it. 00:33:14.526 [2024-07-22 18:10:18.570178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.526 [2024-07-22 18:10:18.570429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.570457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.527 qpair failed and we were unable to recover it. 00:33:14.527 [2024-07-22 18:10:18.570845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.570932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.570958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.527 qpair failed and we were unable to recover it. 00:33:14.527 [2024-07-22 18:10:18.571207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.571444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.571473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.527 qpair failed and we were unable to recover it. 00:33:14.527 [2024-07-22 18:10:18.571697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.572036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.572064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.527 qpair failed and we were unable to recover it. 00:33:14.527 [2024-07-22 18:10:18.572421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.572652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.572679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.527 qpair failed and we were unable to recover it. 
00:33:14.527 [2024-07-22 18:10:18.572933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.573320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.573357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.527 qpair failed and we were unable to recover it. 00:33:14.527 [2024-07-22 18:10:18.573608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.573947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.573974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.527 qpair failed and we were unable to recover it. 00:33:14.527 [2024-07-22 18:10:18.574306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.574545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.574574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.527 qpair failed and we were unable to recover it. 00:33:14.527 [2024-07-22 18:10:18.574951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.575284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.575311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.527 qpair failed and we were unable to recover it. 00:33:14.527 [2024-07-22 18:10:18.575585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.575934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.575961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.527 qpair failed and we were unable to recover it. 00:33:14.527 [2024-07-22 18:10:18.576203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.576436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.576466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.527 qpair failed and we were unable to recover it. 00:33:14.527 [2024-07-22 18:10:18.576704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.577052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.577081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.527 qpair failed and we were unable to recover it. 
00:33:14.527 [2024-07-22 18:10:18.577322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.577491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.577520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.527 qpair failed and we were unable to recover it. 00:33:14.527 [2024-07-22 18:10:18.577895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.578238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.578269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.527 qpair failed and we were unable to recover it. 00:33:14.527 [2024-07-22 18:10:18.578622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.579013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.579042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.527 qpair failed and we were unable to recover it. 00:33:14.527 [2024-07-22 18:10:18.579292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.579613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.579642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.527 qpair failed and we were unable to recover it. 00:33:14.527 [2024-07-22 18:10:18.579975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.580371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.580400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.527 qpair failed and we were unable to recover it. 00:33:14.527 [2024-07-22 18:10:18.580718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.580816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.580843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.527 qpair failed and we were unable to recover it. 00:33:14.527 [2024-07-22 18:10:18.581065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.581437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.527 [2024-07-22 18:10:18.581466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.527 qpair failed and we were unable to recover it. 
00:33:14.527 [2024-07-22 18:10:18.581833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.582190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.582218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.528 qpair failed and we were unable to recover it. 00:33:14.528 [2024-07-22 18:10:18.582579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.582916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.582943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.528 qpair failed and we were unable to recover it. 00:33:14.528 [2024-07-22 18:10:18.583320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.583765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.583792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.528 qpair failed and we were unable to recover it. 00:33:14.528 [2024-07-22 18:10:18.584038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.584369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.584397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.528 qpair failed and we were unable to recover it. 00:33:14.528 [2024-07-22 18:10:18.584635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.584992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.585019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.528 qpair failed and we were unable to recover it. 00:33:14.528 [2024-07-22 18:10:18.585457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.585703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.585731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.528 qpair failed and we were unable to recover it. 00:33:14.528 [2024-07-22 18:10:18.586103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.586343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.586382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.528 qpair failed and we were unable to recover it. 
00:33:14.528 [2024-07-22 18:10:18.586653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.587086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.587114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.528 qpair failed and we were unable to recover it. 00:33:14.528 [2024-07-22 18:10:18.587362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.587772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.587799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.528 qpair failed and we were unable to recover it. 00:33:14.528 [2024-07-22 18:10:18.588159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.588508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.588537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.528 qpair failed and we were unable to recover it. 00:33:14.528 [2024-07-22 18:10:18.588890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.589105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.589132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.528 qpair failed and we were unable to recover it. 00:33:14.528 [2024-07-22 18:10:18.589371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.589612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.589639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.528 qpair failed and we were unable to recover it. 00:33:14.528 [2024-07-22 18:10:18.589982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.590238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.590266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.528 qpair failed and we were unable to recover it. 00:33:14.528 [2024-07-22 18:10:18.590633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.590983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.591010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.528 qpair failed and we were unable to recover it. 
00:33:14.528 [2024-07-22 18:10:18.591257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.591512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.591541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.528 qpair failed and we were unable to recover it. 00:33:14.528 [2024-07-22 18:10:18.591871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.592094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.592122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.528 qpair failed and we were unable to recover it. 00:33:14.528 [2024-07-22 18:10:18.592435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.592648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.528 [2024-07-22 18:10:18.592676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.529 qpair failed and we were unable to recover it. 00:33:14.529 [2024-07-22 18:10:18.593072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.593514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.593542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.529 qpair failed and we were unable to recover it. 00:33:14.529 [2024-07-22 18:10:18.593759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.594113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.594141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.529 qpair failed and we were unable to recover it. 00:33:14.529 [2024-07-22 18:10:18.594498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.594745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.594772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.529 qpair failed and we were unable to recover it. 00:33:14.529 [2024-07-22 18:10:18.595010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.595382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.595410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.529 qpair failed and we were unable to recover it. 
00:33:14.529 [2024-07-22 18:10:18.595753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.595844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.595869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.529 qpair failed and we were unable to recover it. 00:33:14.529 [2024-07-22 18:10:18.596201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.596414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.596442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.529 qpair failed and we were unable to recover it. 00:33:14.529 [2024-07-22 18:10:18.596721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.596939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.596967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.529 qpair failed and we were unable to recover it. 00:33:14.529 [2024-07-22 18:10:18.597067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.597378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.597406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.529 qpair failed and we were unable to recover it. 00:33:14.529 [2024-07-22 18:10:18.597746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.598098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.598126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.529 qpair failed and we were unable to recover it. 00:33:14.529 [2024-07-22 18:10:18.598473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.598824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.598852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.529 qpair failed and we were unable to recover it. 00:33:14.529 [2024-07-22 18:10:18.599211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.599575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.599609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.529 qpair failed and we were unable to recover it. 
00:33:14.529 [2024-07-22 18:10:18.599942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.600209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.600239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.529 qpair failed and we were unable to recover it. 00:33:14.529 [2024-07-22 18:10:18.600442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.600709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.600736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.529 qpair failed and we were unable to recover it. 00:33:14.529 [2024-07-22 18:10:18.601080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.601290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.601317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.529 qpair failed and we were unable to recover it. 00:33:14.529 [2024-07-22 18:10:18.601670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.602022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.602050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.529 qpair failed and we were unable to recover it. 00:33:14.529 [2024-07-22 18:10:18.602282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.602488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.529 [2024-07-22 18:10:18.602518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.529 qpair failed and we were unable to recover it. 00:33:14.530 [2024-07-22 18:10:18.602859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.603245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.603272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.530 qpair failed and we were unable to recover it. 00:33:14.530 [2024-07-22 18:10:18.603488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.603714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.603740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.530 qpair failed and we were unable to recover it. 
00:33:14.530 [2024-07-22 18:10:18.604101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.604306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.604334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.530 qpair failed and we were unable to recover it. 00:33:14.530 [2024-07-22 18:10:18.604738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.605063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.605090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.530 qpair failed and we were unable to recover it. 00:33:14.530 [2024-07-22 18:10:18.605473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.605859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.605887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.530 qpair failed and we were unable to recover it. 00:33:14.530 [2024-07-22 18:10:18.606293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.606660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.606689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.530 qpair failed and we were unable to recover it. 00:33:14.530 [2024-07-22 18:10:18.607050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.607247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.607275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.530 qpair failed and we were unable to recover it. 00:33:14.530 [2024-07-22 18:10:18.607506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.607609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.607637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.530 qpair failed and we were unable to recover it. 00:33:14.530 [2024-07-22 18:10:18.608031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.608242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.608270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.530 qpair failed and we were unable to recover it. 
00:33:14.530 [2024-07-22 18:10:18.608593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.608941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.608968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.530 qpair failed and we were unable to recover it. 00:33:14.530 [2024-07-22 18:10:18.609214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.609454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.609484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.530 qpair failed and we were unable to recover it. 00:33:14.530 [2024-07-22 18:10:18.609863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.610222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.610250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.530 qpair failed and we were unable to recover it. 00:33:14.530 [2024-07-22 18:10:18.610492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.610865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.610894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.530 qpair failed and we were unable to recover it. 00:33:14.530 [2024-07-22 18:10:18.611258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.611513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.611542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.530 qpair failed and we were unable to recover it. 00:33:14.530 [2024-07-22 18:10:18.611907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.612147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.612176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.530 qpair failed and we were unable to recover it. 00:33:14.530 [2024-07-22 18:10:18.612561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.612903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.612931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.530 qpair failed and we were unable to recover it. 
00:33:14.530 [2024-07-22 18:10:18.613228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.613659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.613688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.530 qpair failed and we were unable to recover it. 00:33:14.530 [2024-07-22 18:10:18.614143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.614475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.614506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.530 qpair failed and we were unable to recover it. 00:33:14.530 [2024-07-22 18:10:18.614918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.615134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.530 [2024-07-22 18:10:18.615164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.531 qpair failed and we were unable to recover it. 00:33:14.531 [2024-07-22 18:10:18.615534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.615884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.615912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.531 qpair failed and we were unable to recover it. 00:33:14.531 [2024-07-22 18:10:18.616128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.616489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.616517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.531 qpair failed and we were unable to recover it. 00:33:14.531 [2024-07-22 18:10:18.616880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.617220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.617247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.531 qpair failed and we were unable to recover it. 00:33:14.531 [2024-07-22 18:10:18.617607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.617880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.617907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.531 qpair failed and we were unable to recover it. 
00:33:14.531 [2024-07-22 18:10:18.618024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.618272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.618300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.531 qpair failed and we were unable to recover it. 00:33:14.531 [2024-07-22 18:10:18.618706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.618937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.618965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.531 qpair failed and we were unable to recover it. 00:33:14.531 [2024-07-22 18:10:18.619202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.619419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.619448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.531 qpair failed and we were unable to recover it. 00:33:14.531 [2024-07-22 18:10:18.619688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.619920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.619948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.531 qpair failed and we were unable to recover it. 00:33:14.531 [2024-07-22 18:10:18.620294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.620594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.620623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.531 qpair failed and we were unable to recover it. 00:33:14.531 [2024-07-22 18:10:18.620985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.621221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.621250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.531 qpair failed and we were unable to recover it. 00:33:14.531 [2024-07-22 18:10:18.621629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.621983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.622011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.531 qpair failed and we were unable to recover it. 
00:33:14.531 [2024-07-22 18:10:18.622377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.622719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.622749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.531 qpair failed and we were unable to recover it. 00:33:14.531 [2024-07-22 18:10:18.623038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.623250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.623277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.531 qpair failed and we were unable to recover it. 00:33:14.531 [2024-07-22 18:10:18.623669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.623900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.623926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.531 qpair failed and we were unable to recover it. 00:33:14.531 [2024-07-22 18:10:18.624300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.624554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.624583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.531 qpair failed and we were unable to recover it. 00:33:14.531 [2024-07-22 18:10:18.624933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.625131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.625159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.531 qpair failed and we were unable to recover it. 00:33:14.531 [2024-07-22 18:10:18.625253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.625624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.625653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.531 qpair failed and we were unable to recover it. 00:33:14.531 [2024-07-22 18:10:18.625980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.626215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.626242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.531 qpair failed and we were unable to recover it. 
00:33:14.531 [2024-07-22 18:10:18.626678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.531 [2024-07-22 18:10:18.626990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.627017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.532 qpair failed and we were unable to recover it. 00:33:14.532 [2024-07-22 18:10:18.627251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.627588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.627616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.532 qpair failed and we were unable to recover it. 00:33:14.532 [2024-07-22 18:10:18.627840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.628176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.628202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.532 qpair failed and we were unable to recover it. 00:33:14.532 [2024-07-22 18:10:18.628564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.628748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.628774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.532 qpair failed and we were unable to recover it. 00:33:14.532 [2024-07-22 18:10:18.629176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.629402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.629432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.532 qpair failed and we were unable to recover it. 00:33:14.532 [2024-07-22 18:10:18.629669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.630062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.630089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.532 qpair failed and we were unable to recover it. 00:33:14.532 [2024-07-22 18:10:18.630492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.630716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.630742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.532 qpair failed and we were unable to recover it. 
00:33:14.532 [2024-07-22 18:10:18.631152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.631397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.631426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.532 qpair failed and we were unable to recover it. 00:33:14.532 [2024-07-22 18:10:18.631788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.632113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.632146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.532 qpair failed and we were unable to recover it. 00:33:14.532 [2024-07-22 18:10:18.632390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.632618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.632645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.532 qpair failed and we were unable to recover it. 00:33:14.532 [2024-07-22 18:10:18.632908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.633144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.633171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.532 qpair failed and we were unable to recover it. 00:33:14.532 [2024-07-22 18:10:18.633575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.633795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.633822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.532 qpair failed and we were unable to recover it. 00:33:14.532 [2024-07-22 18:10:18.634052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.634267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.634293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.532 qpair failed and we were unable to recover it. 00:33:14.532 [2024-07-22 18:10:18.634655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.635003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.635030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.532 qpair failed and we were unable to recover it. 
00:33:14.532 [2024-07-22 18:10:18.635376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.635589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.635615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.532 qpair failed and we were unable to recover it. 00:33:14.532 [2024-07-22 18:10:18.635990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.636315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.636342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.532 qpair failed and we were unable to recover it. 00:33:14.532 [2024-07-22 18:10:18.636721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.637050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.637077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.532 qpair failed and we were unable to recover it. 00:33:14.532 [2024-07-22 18:10:18.637429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.637655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.637682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.532 qpair failed and we were unable to recover it. 00:33:14.532 [2024-07-22 18:10:18.638061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.638422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.638451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.532 qpair failed and we were unable to recover it. 00:33:14.532 [2024-07-22 18:10:18.638583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.638977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.532 [2024-07-22 18:10:18.639003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.533 qpair failed and we were unable to recover it. 00:33:14.533 [2024-07-22 18:10:18.639379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.639698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.639724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.533 qpair failed and we were unable to recover it. 
00:33:14.533 [2024-07-22 18:10:18.640089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.640288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.640315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.533 qpair failed and we were unable to recover it. 00:33:14.533 [2024-07-22 18:10:18.640699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.641001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.641028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.533 qpair failed and we were unable to recover it. 00:33:14.533 [2024-07-22 18:10:18.641390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.641784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.641811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.533 qpair failed and we were unable to recover it. 00:33:14.533 [2024-07-22 18:10:18.642052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.642291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.642319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.533 qpair failed and we were unable to recover it. 00:33:14.533 [2024-07-22 18:10:18.642446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.642683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.642713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.533 qpair failed and we were unable to recover it. 00:33:14.533 [2024-07-22 18:10:18.643010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.643373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.643402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.533 qpair failed and we were unable to recover it. 00:33:14.533 [2024-07-22 18:10:18.643799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.644097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.644123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.533 qpair failed and we were unable to recover it. 
00:33:14.533 [2024-07-22 18:10:18.644500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.644870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.644896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.533 qpair failed and we were unable to recover it. 00:33:14.533 [2024-07-22 18:10:18.645250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.645635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.645663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.533 qpair failed and we were unable to recover it. 00:33:14.533 [2024-07-22 18:10:18.646032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.646376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.646405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.533 qpair failed and we were unable to recover it. 00:33:14.533 [2024-07-22 18:10:18.646769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.646980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.647006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.533 qpair failed and we were unable to recover it. 00:33:14.533 [2024-07-22 18:10:18.647102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.647218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.647244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.533 qpair failed and we were unable to recover it. 00:33:14.533 [2024-07-22 18:10:18.647521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.647908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.647935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.533 qpair failed and we were unable to recover it. 00:33:14.533 [2024-07-22 18:10:18.648270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.648671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.648700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.533 qpair failed and we were unable to recover it. 
00:33:14.533 [2024-07-22 18:10:18.649018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.649392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.649420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.533 qpair failed and we were unable to recover it. 00:33:14.533 [2024-07-22 18:10:18.649629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.649934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.649961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.533 qpair failed and we were unable to recover it. 00:33:14.533 [2024-07-22 18:10:18.650203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.650578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.650606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.533 qpair failed and we were unable to recover it. 00:33:14.533 [2024-07-22 18:10:18.650810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.651161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.533 [2024-07-22 18:10:18.651187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.533 qpair failed and we were unable to recover it. 00:33:14.533 [2024-07-22 18:10:18.651543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.651914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.651943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.534 qpair failed and we were unable to recover it. 00:33:14.534 [2024-07-22 18:10:18.652306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.652673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.652703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.534 qpair failed and we were unable to recover it. 00:33:14.534 18:10:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:14.534 [2024-07-22 18:10:18.653080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 18:10:18 -- common/autotest_common.sh@852 -- # return 0 00:33:14.534 [2024-07-22 18:10:18.653416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.653448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.534 18:10:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:14.534 qpair failed and we were unable to recover it. 
00:33:14.534 18:10:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:14.534 [2024-07-22 18:10:18.653820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 18:10:18 -- common/autotest_common.sh@10 -- # set +x 00:33:14.534 [2024-07-22 18:10:18.654122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.654151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.534 qpair failed and we were unable to recover it. 00:33:14.534 [2024-07-22 18:10:18.654479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.654817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.654844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.534 qpair failed and we were unable to recover it. 00:33:14.534 [2024-07-22 18:10:18.655083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.655468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.655497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.534 qpair failed and we were unable to recover it. 00:33:14.534 [2024-07-22 18:10:18.655734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.655966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.655995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.534 qpair failed and we were unable to recover it. 00:33:14.534 [2024-07-22 18:10:18.656370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.656569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.656598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.534 qpair failed and we were unable to recover it. 00:33:14.534 [2024-07-22 18:10:18.656946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.657260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.657287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.534 qpair failed and we were unable to recover it. 00:33:14.534 [2024-07-22 18:10:18.657698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.658086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.658120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.534 qpair failed and we were unable to recover it. 
00:33:14.534 [2024-07-22 18:10:18.658370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.658751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.658780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.534 qpair failed and we were unable to recover it. 00:33:14.534 [2024-07-22 18:10:18.659078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.659484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.659513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.534 qpair failed and we were unable to recover it. 00:33:14.534 [2024-07-22 18:10:18.659746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.660143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.660172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.534 qpair failed and we were unable to recover it. 00:33:14.534 [2024-07-22 18:10:18.660511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.660711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.660741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.534 qpair failed and we were unable to recover it. 00:33:14.534 [2024-07-22 18:10:18.661081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.661310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.661338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.534 qpair failed and we were unable to recover it. 00:33:14.534 [2024-07-22 18:10:18.661582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.661931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.661959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.534 qpair failed and we were unable to recover it. 00:33:14.534 [2024-07-22 18:10:18.662187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.662537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.662566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.534 qpair failed and we were unable to recover it. 
00:33:14.534 [2024-07-22 18:10:18.662936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.663240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.663266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.534 qpair failed and we were unable to recover it. 00:33:14.534 [2024-07-22 18:10:18.663658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.664042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.534 [2024-07-22 18:10:18.664069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.534 qpair failed and we were unable to recover it. 00:33:14.535 [2024-07-22 18:10:18.664417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.664800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.664829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.535 qpair failed and we were unable to recover it. 00:33:14.535 [2024-07-22 18:10:18.665196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.665422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.665451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.535 qpair failed and we were unable to recover it. 00:33:14.535 [2024-07-22 18:10:18.665837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.666197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.666224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.535 qpair failed and we were unable to recover it. 00:33:14.535 [2024-07-22 18:10:18.666496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.666711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.666737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.535 qpair failed and we were unable to recover it. 00:33:14.535 [2024-07-22 18:10:18.666942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.667250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.667278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.535 qpair failed and we were unable to recover it. 
00:33:14.535 [2024-07-22 18:10:18.667618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.667975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.668004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.535 qpair failed and we were unable to recover it. 00:33:14.535 [2024-07-22 18:10:18.668386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.668766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.668795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.535 qpair failed and we were unable to recover it. 00:33:14.535 [2024-07-22 18:10:18.669145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.669447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.669476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.535 qpair failed and we were unable to recover it. 00:33:14.535 [2024-07-22 18:10:18.669697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.670075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.670103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.535 qpair failed and we were unable to recover it. 00:33:14.535 [2024-07-22 18:10:18.670455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.670721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.670752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.535 qpair failed and we were unable to recover it. 00:33:14.535 [2024-07-22 18:10:18.671091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.671418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.671447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.535 qpair failed and we were unable to recover it. 00:33:14.535 [2024-07-22 18:10:18.671795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.672108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.672136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.535 qpair failed and we were unable to recover it. 
00:33:14.535 [2024-07-22 18:10:18.672368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.672782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.672813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.535 qpair failed and we were unable to recover it. 00:33:14.535 [2024-07-22 18:10:18.673154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.673372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.673400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.535 qpair failed and we were unable to recover it. 00:33:14.535 [2024-07-22 18:10:18.673737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.674097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.674123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.535 qpair failed and we were unable to recover it. 00:33:14.535 [2024-07-22 18:10:18.674476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.674846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.674874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.535 qpair failed and we were unable to recover it. 00:33:14.535 [2024-07-22 18:10:18.675250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.675568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.675596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.535 qpair failed and we were unable to recover it. 00:33:14.535 [2024-07-22 18:10:18.675837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.676187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.676215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.535 qpair failed and we were unable to recover it. 00:33:14.535 [2024-07-22 18:10:18.676444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.676698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.676726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.535 qpair failed and we were unable to recover it. 
00:33:14.535 [2024-07-22 18:10:18.676980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.677288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.677316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.535 qpair failed and we were unable to recover it. 00:33:14.535 [2024-07-22 18:10:18.677697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.535 [2024-07-22 18:10:18.677895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.677921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.536 qpair failed and we were unable to recover it. 00:33:14.536 [2024-07-22 18:10:18.678286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.678691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.678722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.536 qpair failed and we were unable to recover it. 00:33:14.536 [2024-07-22 18:10:18.679079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.679455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.679484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.536 qpair failed and we were unable to recover it. 00:33:14.536 [2024-07-22 18:10:18.679852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.680054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.680081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.536 qpair failed and we were unable to recover it. 00:33:14.536 [2024-07-22 18:10:18.680303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.680517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.680546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.536 qpair failed and we were unable to recover it. 00:33:14.536 [2024-07-22 18:10:18.680913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.681302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.681330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.536 qpair failed and we were unable to recover it. 
00:33:14.536 [2024-07-22 18:10:18.681706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.682066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.682092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.536 qpair failed and we were unable to recover it. 00:33:14.536 [2024-07-22 18:10:18.682454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.682690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.682716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.536 qpair failed and we were unable to recover it. 00:33:14.536 [2024-07-22 18:10:18.682816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.683014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.683041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.536 qpair failed and we were unable to recover it. 00:33:14.536 [2024-07-22 18:10:18.683164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.683385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.683414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.536 qpair failed and we were unable to recover it. 00:33:14.536 [2024-07-22 18:10:18.683782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.684119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.684148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.536 qpair failed and we were unable to recover it. 00:33:14.536 [2024-07-22 18:10:18.684511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.684871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.684899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.536 qpair failed and we were unable to recover it. 00:33:14.536 [2024-07-22 18:10:18.685257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.685656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.685684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.536 qpair failed and we were unable to recover it. 
00:33:14.536 [2024-07-22 18:10:18.686038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.686404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.686433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.536 qpair failed and we were unable to recover it. 00:33:14.536 [2024-07-22 18:10:18.686731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.687121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.687148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.536 qpair failed and we were unable to recover it. 00:33:14.536 [2024-07-22 18:10:18.687390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.687727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.687758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.536 qpair failed and we were unable to recover it. 00:33:14.536 [2024-07-22 18:10:18.688118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.688313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.688344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.536 qpair failed and we were unable to recover it. 00:33:14.536 [2024-07-22 18:10:18.688756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.689123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.689151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.536 qpair failed and we were unable to recover it. 00:33:14.536 [2024-07-22 18:10:18.689366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.689679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.689706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.536 qpair failed and we were unable to recover it. 00:33:14.536 [2024-07-22 18:10:18.689955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.690381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.690412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.536 qpair failed and we were unable to recover it. 
00:33:14.536 [2024-07-22 18:10:18.690609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.690951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.690980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.536 qpair failed and we were unable to recover it. 00:33:14.536 [2024-07-22 18:10:18.691240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.691637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.691673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.536 qpair failed and we were unable to recover it. 00:33:14.536 [2024-07-22 18:10:18.691913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.692149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.692176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.536 qpair failed and we were unable to recover it. 00:33:14.536 [2024-07-22 18:10:18.692420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.692801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 18:10:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:14.536 [2024-07-22 18:10:18.692830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.536 qpair failed and we were unable to recover it. 00:33:14.536 18:10:18 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:14.536 [2024-07-22 18:10:18.693253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 18:10:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:14.536 [2024-07-22 18:10:18.693598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.693627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.536 qpair failed and we were unable to recover it. 00:33:14.536 18:10:18 -- common/autotest_common.sh@10 -- # set +x 00:33:14.536 [2024-07-22 18:10:18.694002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.694222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.694248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.536 qpair failed and we were unable to recover it. 00:33:14.536 [2024-07-22 18:10:18.694681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.695027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.695054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.536 qpair failed and we were unable to recover it. 
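Interleaved with the connection errors above, nvmf/common.sh installs a cleanup trap and host/target_disconnect.sh starts issuing RPCs. The trap registers process_shm and nvmftestfini to run on SIGINT, SIGTERM, and normal exit; the '|| :' keeps a failing process_shm from aborting the handler when the script runs under set -e. A stand-alone sketch of that pattern (process_shm and nvmftestfini are the harness's own helpers and are only referenced, not defined, here):

    # Register cleanup that always runs, whether the test is interrupted or exits normally.
    # '|| :' swallows a process_shm failure so nvmftestfini still gets a chance to run.
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT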
00:33:14.536 [2024-07-22 18:10:18.695418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.695808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.536 [2024-07-22 18:10:18.695835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.536 qpair failed and we were unable to recover it. 00:33:14.537 [2024-07-22 18:10:18.696211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.696614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.696642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.537 qpair failed and we were unable to recover it. 00:33:14.537 [2024-07-22 18:10:18.696999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.697285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.697311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.537 qpair failed and we were unable to recover it. 00:33:14.537 [2024-07-22 18:10:18.697713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.697998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.698024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.537 qpair failed and we were unable to recover it. 00:33:14.537 [2024-07-22 18:10:18.698383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.698630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.698660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.537 qpair failed and we were unable to recover it. 00:33:14.537 [2024-07-22 18:10:18.698908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.699111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.699137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.537 qpair failed and we were unable to recover it. 00:33:14.537 [2024-07-22 18:10:18.699423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.699778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.699805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.537 qpair failed and we were unable to recover it. 
00:33:14.537 [2024-07-22 18:10:18.700149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.700507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.700535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.537 qpair failed and we were unable to recover it. 00:33:14.537 [2024-07-22 18:10:18.700869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.701188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.701215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.537 qpair failed and we were unable to recover it. 00:33:14.537 [2024-07-22 18:10:18.701595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.701944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.701971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.537 qpair failed and we were unable to recover it. 00:33:14.537 [2024-07-22 18:10:18.702380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.702500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.702528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.537 qpair failed and we were unable to recover it. 00:33:14.537 [2024-07-22 18:10:18.702873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.703272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.703299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.537 qpair failed and we were unable to recover it. 00:33:14.537 [2024-07-22 18:10:18.703674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.704094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.704121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.537 qpair failed and we were unable to recover it. 00:33:14.537 [2024-07-22 18:10:18.704495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.704850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.704876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.537 qpair failed and we were unable to recover it. 
00:33:14.537 [2024-07-22 18:10:18.705237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.705464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.705493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.537 qpair failed and we were unable to recover it. 00:33:14.537 [2024-07-22 18:10:18.705861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.706065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.706092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.537 qpair failed and we were unable to recover it. 00:33:14.537 [2024-07-22 18:10:18.706342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.706558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.706585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.537 qpair failed and we were unable to recover it. 00:33:14.537 [2024-07-22 18:10:18.706927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.707288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.707315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.537 qpair failed and we were unable to recover it. 00:33:14.537 [2024-07-22 18:10:18.707685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.708023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.708050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.537 qpair failed and we were unable to recover it. 00:33:14.537 [2024-07-22 18:10:18.708383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.708766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.708792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.537 qpair failed and we were unable to recover it. 00:33:14.537 [2024-07-22 18:10:18.709020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.709400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.709429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.537 qpair failed and we were unable to recover it. 
00:33:14.537 [2024-07-22 18:10:18.709742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.710089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.710116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.537 qpair failed and we were unable to recover it. 00:33:14.537 [2024-07-22 18:10:18.710456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.710798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.710825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.537 qpair failed and we were unable to recover it. 00:33:14.537 [2024-07-22 18:10:18.711176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.711519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.711547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.537 qpair failed and we were unable to recover it. 00:33:14.537 [2024-07-22 18:10:18.711814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.712206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.712233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.537 qpair failed and we were unable to recover it. 00:33:14.537 [2024-07-22 18:10:18.712592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.712952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.712979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.537 qpair failed and we were unable to recover it. 00:33:14.537 [2024-07-22 18:10:18.713337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.713743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.713771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.537 qpair failed and we were unable to recover it. 00:33:14.537 [2024-07-22 18:10:18.713993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.714317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.714343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.537 qpair failed and we were unable to recover it. 
00:33:14.537 [2024-07-22 18:10:18.714735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.715094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.715120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.537 qpair failed and we were unable to recover it. 00:33:14.537 [2024-07-22 18:10:18.715498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.537 [2024-07-22 18:10:18.715725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.715753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.538 qpair failed and we were unable to recover it. 00:33:14.538 [2024-07-22 18:10:18.716113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.716458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.716486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.538 qpair failed and we were unable to recover it. 00:33:14.538 [2024-07-22 18:10:18.716878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.717087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.717114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.538 qpair failed and we were unable to recover it. 00:33:14.538 [2024-07-22 18:10:18.717524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.717896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.717923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.538 qpair failed and we were unable to recover it. 00:33:14.538 [2024-07-22 18:10:18.718144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.718469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.718497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.538 qpair failed and we were unable to recover it. 00:33:14.538 [2024-07-22 18:10:18.718842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.719219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.719252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.538 qpair failed and we were unable to recover it. 
00:33:14.538 [2024-07-22 18:10:18.719502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.719857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.719884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.538 qpair failed and we were unable to recover it. 00:33:14.538 [2024-07-22 18:10:18.720008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.720266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.720293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.538 qpair failed and we were unable to recover it. 00:33:14.538 [2024-07-22 18:10:18.720697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.720963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.720990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.538 qpair failed and we were unable to recover it. 00:33:14.538 [2024-07-22 18:10:18.721365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.721746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.721773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.538 qpair failed and we were unable to recover it. 00:33:14.538 [2024-07-22 18:10:18.722008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.722414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.722443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.538 qpair failed and we were unable to recover it. 00:33:14.538 [2024-07-22 18:10:18.722803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.723213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.723240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.538 qpair failed and we were unable to recover it. 00:33:14.538 Malloc0 00:33:14.538 [2024-07-22 18:10:18.723529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.723874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.723901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.538 qpair failed and we were unable to recover it. 
00:33:14.538 [2024-07-22 18:10:18.724265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 18:10:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:14.538 [2024-07-22 18:10:18.724541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.724569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.538 qpair failed and we were unable to recover it. 00:33:14.538 18:10:18 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:14.538 [2024-07-22 18:10:18.724820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.725037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.725063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.538 qpair failed and we were unable to recover it. 00:33:14.538 18:10:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:14.538 18:10:18 -- common/autotest_common.sh@10 -- # set +x 00:33:14.538 [2024-07-22 18:10:18.725434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.725575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.725605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.538 qpair failed and we were unable to recover it. 00:33:14.538 [2024-07-22 18:10:18.726006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.726371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.726400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.538 qpair failed and we were unable to recover it. 00:33:14.538 [2024-07-22 18:10:18.726676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.727020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.727047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.538 qpair failed and we were unable to recover it. 00:33:14.538 [2024-07-22 18:10:18.727420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.727826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.727852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.538 qpair failed and we were unable to recover it. 00:33:14.538 [2024-07-22 18:10:18.728272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.728642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.728670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.538 qpair failed and we were unable to recover it. 
00:33:14.538 [2024-07-22 18:10:18.728922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.729124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.729152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.538 qpair failed and we were unable to recover it. 00:33:14.538 [2024-07-22 18:10:18.729554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.729908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.729935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.538 qpair failed and we were unable to recover it. 00:33:14.538 [2024-07-22 18:10:18.730280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.730505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.730534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.538 qpair failed and we were unable to recover it. 00:33:14.538 [2024-07-22 18:10:18.730883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.730953] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:14.538 [2024-07-22 18:10:18.731207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.731235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.538 qpair failed and we were unable to recover it. 00:33:14.538 [2024-07-22 18:10:18.731489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.731662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.731688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.538 qpair failed and we were unable to recover it. 00:33:14.538 [2024-07-22 18:10:18.732088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.732449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.732478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.538 qpair failed and we were unable to recover it. 00:33:14.538 [2024-07-22 18:10:18.732849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.733207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.733233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.538 qpair failed and we were unable to recover it. 
00:33:14.538 [2024-07-22 18:10:18.733495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.733699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.538 [2024-07-22 18:10:18.733726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.538 qpair failed and we were unable to recover it. 00:33:14.539 [2024-07-22 18:10:18.734090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.734446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.734474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.539 qpair failed and we were unable to recover it. 00:33:14.539 [2024-07-22 18:10:18.734852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.735082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.735110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.539 qpair failed and we were unable to recover it. 00:33:14.539 [2024-07-22 18:10:18.735460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.735692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.735719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.539 qpair failed and we were unable to recover it. 00:33:14.539 [2024-07-22 18:10:18.735971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.736310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.736337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.539 qpair failed and we were unable to recover it. 00:33:14.539 [2024-07-22 18:10:18.736740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.736972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.736998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.539 qpair failed and we were unable to recover it. 00:33:14.539 [2024-07-22 18:10:18.737214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.737476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.737503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.539 qpair failed and we were unable to recover it. 
00:33:14.539 [2024-07-22 18:10:18.737774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.737978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.738004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.539 qpair failed and we were unable to recover it. 00:33:14.539 [2024-07-22 18:10:18.738389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.738755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.738782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.539 qpair failed and we were unable to recover it. 00:33:14.539 [2024-07-22 18:10:18.739026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.739380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.739409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.539 qpair failed and we were unable to recover it. 00:33:14.539 [2024-07-22 18:10:18.739796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.740047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.740073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.539 qpair failed and we were unable to recover it. 00:33:14.539 [2024-07-22 18:10:18.740422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 18:10:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:14.539 [2024-07-22 18:10:18.740772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.740799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.539 qpair failed and we were unable to recover it. 00:33:14.539 18:10:18 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:14.539 [2024-07-22 18:10:18.741155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 18:10:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:14.539 18:10:18 -- common/autotest_common.sh@10 -- # set +x 00:33:14.539 [2024-07-22 18:10:18.741510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.741539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.539 qpair failed and we were unable to recover it. 00:33:14.539 [2024-07-22 18:10:18.741973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.742321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.742372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.539 qpair failed and we were unable to recover it. 
00:33:14.539 [2024-07-22 18:10:18.742595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.742850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.742878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.539 qpair failed and we were unable to recover it. 00:33:14.539 [2024-07-22 18:10:18.743131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.743497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.743526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.539 qpair failed and we were unable to recover it. 00:33:14.539 [2024-07-22 18:10:18.743960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.744282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.744309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.539 qpair failed and we were unable to recover it. 00:33:14.539 [2024-07-22 18:10:18.744720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.745067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.745101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.539 qpair failed and we were unable to recover it. 00:33:14.539 [2024-07-22 18:10:18.745469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.745679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.745713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.539 qpair failed and we were unable to recover it. 00:33:14.539 [2024-07-22 18:10:18.745992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.746395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.746424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.539 qpair failed and we were unable to recover it. 00:33:14.539 [2024-07-22 18:10:18.746787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.747118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.747145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.539 qpair failed and we were unable to recover it. 
00:33:14.539 [2024-07-22 18:10:18.747377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.747716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.747744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.539 qpair failed and we were unable to recover it. 00:33:14.539 [2024-07-22 18:10:18.748110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.748439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.748468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.539 qpair failed and we were unable to recover it. 00:33:14.539 [2024-07-22 18:10:18.748833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.749069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.749097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.539 qpair failed and we were unable to recover it. 00:33:14.539 [2024-07-22 18:10:18.749442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.749823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.749851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.539 qpair failed and we were unable to recover it. 00:33:14.539 [2024-07-22 18:10:18.750138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.750260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.750288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.539 qpair failed and we were unable to recover it. 00:33:14.539 [2024-07-22 18:10:18.750517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.750871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.750898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.539 qpair failed and we were unable to recover it. 00:33:14.539 [2024-07-22 18:10:18.751246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.751446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.751475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.539 qpair failed and we were unable to recover it. 
00:33:14.539 [2024-07-22 18:10:18.751679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.751781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.539 [2024-07-22 18:10:18.751807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.540 qpair failed and we were unable to recover it. 00:33:14.540 [2024-07-22 18:10:18.752023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 18:10:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:14.540 [2024-07-22 18:10:18.752447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.752476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.540 qpair failed and we were unable to recover it. 00:33:14.540 18:10:18 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:14.540 [2024-07-22 18:10:18.752852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 18:10:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:14.540 [2024-07-22 18:10:18.753113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.753140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.540 qpair failed and we were unable to recover it. 00:33:14.540 18:10:18 -- common/autotest_common.sh@10 -- # set +x 00:33:14.540 [2024-07-22 18:10:18.753475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.753844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.753872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.540 qpair failed and we were unable to recover it. 00:33:14.540 [2024-07-22 18:10:18.754132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.754487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.754515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.540 qpair failed and we were unable to recover it. 00:33:14.540 [2024-07-22 18:10:18.754796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.755015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.755042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.540 qpair failed and we were unable to recover it. 00:33:14.540 [2024-07-22 18:10:18.755432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.755654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.755681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.540 qpair failed and we were unable to recover it. 
00:33:14.540 [2024-07-22 18:10:18.756070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.756364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.756391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.540 qpair failed and we were unable to recover it. 00:33:14.540 [2024-07-22 18:10:18.756605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.756838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.756865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.540 qpair failed and we were unable to recover it. 00:33:14.540 [2024-07-22 18:10:18.757219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.757637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.757664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.540 qpair failed and we were unable to recover it. 00:33:14.540 [2024-07-22 18:10:18.757890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.758264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.758290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.540 qpair failed and we were unable to recover it. 00:33:14.540 [2024-07-22 18:10:18.758536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.758881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.758907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.540 qpair failed and we were unable to recover it. 00:33:14.540 [2024-07-22 18:10:18.759272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.759474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.759504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.540 qpair failed and we were unable to recover it. 00:33:14.540 [2024-07-22 18:10:18.759853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.760205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.760232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.540 qpair failed and we were unable to recover it. 
00:33:14.540 [2024-07-22 18:10:18.760628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.760990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.761017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.540 qpair failed and we were unable to recover it. 00:33:14.540 [2024-07-22 18:10:18.761375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.761672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.761708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.540 qpair failed and we were unable to recover it. 00:33:14.540 [2024-07-22 18:10:18.762067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.762277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.762303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.540 qpair failed and we were unable to recover it. 00:33:14.540 [2024-07-22 18:10:18.762702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.763044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.763071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.540 qpair failed and we were unable to recover it. 00:33:14.540 [2024-07-22 18:10:18.763444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.763836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.763864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.540 qpair failed and we were unable to recover it. 00:33:14.540 [2024-07-22 18:10:18.764223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 18:10:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:14.540 [2024-07-22 18:10:18.764578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.764606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.540 qpair failed and we were unable to recover it. 00:33:14.540 18:10:18 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:14.540 [2024-07-22 18:10:18.764955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 18:10:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:14.540 [2024-07-22 18:10:18.765307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.765335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.540 qpair failed and we were unable to recover it. 
00:33:14.540 18:10:18 -- common/autotest_common.sh@10 -- # set +x 00:33:14.540 [2024-07-22 18:10:18.765723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.766097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.540 [2024-07-22 18:10:18.766125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.540 qpair failed and we were unable to recover it. 00:33:14.540 [2024-07-22 18:10:18.766418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.541 [2024-07-22 18:10:18.766787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.541 [2024-07-22 18:10:18.766814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.541 qpair failed and we were unable to recover it. 00:33:14.541 [2024-07-22 18:10:18.767033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.541 [2024-07-22 18:10:18.767430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.541 [2024-07-22 18:10:18.767458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.541 qpair failed and we were unable to recover it. 00:33:14.541 [2024-07-22 18:10:18.767692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.541 [2024-07-22 18:10:18.768051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.541 [2024-07-22 18:10:18.768079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.541 qpair failed and we were unable to recover it. 00:33:14.541 [2024-07-22 18:10:18.768309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.541 [2024-07-22 18:10:18.768534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.541 [2024-07-22 18:10:18.768566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.541 qpair failed and we were unable to recover it. 00:33:14.541 [2024-07-22 18:10:18.768840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.541 [2024-07-22 18:10:18.769197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.541 [2024-07-22 18:10:18.769224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.541 qpair failed and we were unable to recover it. 00:33:14.541 [2024-07-22 18:10:18.769476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.541 [2024-07-22 18:10:18.769852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.541 [2024-07-22 18:10:18.769879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.541 qpair failed and we were unable to recover it. 
00:33:14.541 [2024-07-22 18:10:18.770257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.541 [2024-07-22 18:10:18.770490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.541 [2024-07-22 18:10:18.770517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.541 qpair failed and we were unable to recover it. 00:33:14.541 [2024-07-22 18:10:18.770770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.541 [2024-07-22 18:10:18.771004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.541 [2024-07-22 18:10:18.771030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.541 qpair failed and we were unable to recover it. 00:33:14.541 [2024-07-22 18:10:18.771283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.541 [2024-07-22 18:10:18.771503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.541 [2024-07-22 18:10:18.771531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12dc8b0 with addr=10.0.0.2, port=4420 00:33:14.541 qpair failed and we were unable to recover it. 00:33:14.541 [2024-07-22 18:10:18.771904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:14.541 [2024-07-22 18:10:18.772094] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:14.541 18:10:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:14.541 18:10:18 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:14.541 18:10:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:14.541 18:10:18 -- common/autotest_common.sh@10 -- # set +x 00:33:14.541 [2024-07-22 18:10:18.778718] posix.c: 670:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set 00:33:14.541 [2024-07-22 18:10:18.778833] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12dc8b0 (107): Transport endpoint is not connected 00:33:14.541 [2024-07-22 18:10:18.778941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:14.541 qpair failed and we were unable to recover it. 00:33:14.541 [2024-07-22 18:10:18.782022] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:14.541 [2024-07-22 18:10:18.782181] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:14.541 [2024-07-22 18:10:18.782236] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:14.541 [2024-07-22 18:10:18.782259] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:14.541 [2024-07-22 18:10:18.782279] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:14.541 [2024-07-22 18:10:18.782325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:14.541 qpair failed and we were unable to recover it. 
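Interleaved with the connect-retry noise above, the target side has now been configured by the test script: nvmf_create_transport brought up the TCP transport ("*** TCP Transport Init ***"), nvmf_create_subsystem created nqn.2016-06.io.spdk:cnode1, nvmf_subsystem_add_ns attached the Malloc0 bdev as its namespace, and nvmf_subsystem_add_listener started the 10.0.0.2:4420 listener ("*** NVMe/TCP Target Listening ***"). The rpc_cmd wrapper hides the plumbing; a rough stand-alone sketch of the same sequence via scripts/rpc.py is shown below. The subsystem and listener arguments are the ones visible in the log; the bdev_malloc_create step and its size/block-size values are assumptions, since the log only echoes the resulting bdev name "Malloc0".

  # approximate stand-alone equivalent of the rpc_cmd calls seen above
  RPC=./scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o
  $RPC bdev_malloc_create -b Malloc0 64 512    # sizes are illustrative assumptions
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420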
00:33:14.541 18:10:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:14.541 18:10:18 -- host/target_disconnect.sh@58 -- # wait 1889878 00:33:14.804 [2024-07-22 18:10:18.791835] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:14.804 [2024-07-22 18:10:18.791959] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:14.804 [2024-07-22 18:10:18.792000] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:14.804 [2024-07-22 18:10:18.792018] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:14.804 [2024-07-22 18:10:18.792033] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:14.804 [2024-07-22 18:10:18.792068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:14.804 qpair failed and we were unable to recover it. 00:33:14.804 [2024-07-22 18:10:18.802172] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:14.804 [2024-07-22 18:10:18.802356] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:14.804 [2024-07-22 18:10:18.802394] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:14.804 [2024-07-22 18:10:18.802407] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:14.804 [2024-07-22 18:10:18.802417] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:14.804 [2024-07-22 18:10:18.802441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:14.804 qpair failed and we were unable to recover it. 00:33:14.804 [2024-07-22 18:10:18.811809] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:14.804 [2024-07-22 18:10:18.811918] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:14.804 [2024-07-22 18:10:18.811944] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:14.804 [2024-07-22 18:10:18.811954] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:14.804 [2024-07-22 18:10:18.811960] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:14.804 [2024-07-22 18:10:18.811979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:14.804 qpair failed and we were unable to recover it. 
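From here on the failure mode changes: the TCP connection itself succeeds, but the Fabrics CONNECT for I/O qpair 4 is rejected; the target logs "Unknown controller ID 0x1" while the host reports "Connect command failed ... sct 1, sc 130". Decoded, sct 1 is the command-specific status type and sc 130 is 0x82, which (assuming the standard NVMe-oF Connect status codes) reads as "Connect Invalid Parameters" - consistent with the target no longer recognizing controller ID 1, which is the disconnect/reconnect behavior host/target_disconnect.sh is exercising. A trivial decode, for reference only:

  # the log prints the status fields in decimal; show them as the spec's hex values
  printf 'sct=0x%02x sc=0x%02x\n' 1 130    # -> sct=0x01 sc=0x82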
00:33:14.804 [2024-07-22 18:10:18.821866] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:14.804 [2024-07-22 18:10:18.821949] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:14.804 [2024-07-22 18:10:18.821973] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:14.804 [2024-07-22 18:10:18.821980] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:14.804 [2024-07-22 18:10:18.821986] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:14.804 [2024-07-22 18:10:18.822002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:14.804 qpair failed and we were unable to recover it. 00:33:14.804 [2024-07-22 18:10:18.831846] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:14.804 [2024-07-22 18:10:18.831927] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:14.804 [2024-07-22 18:10:18.831952] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:14.804 [2024-07-22 18:10:18.831960] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:14.804 [2024-07-22 18:10:18.831965] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:14.804 [2024-07-22 18:10:18.831982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:14.804 qpair failed and we were unable to recover it. 00:33:14.804 [2024-07-22 18:10:18.842153] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:14.804 [2024-07-22 18:10:18.842276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:14.804 [2024-07-22 18:10:18.842302] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:14.804 [2024-07-22 18:10:18.842309] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:14.804 [2024-07-22 18:10:18.842315] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:14.804 [2024-07-22 18:10:18.842336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:14.804 qpair failed and we were unable to recover it. 
00:33:14.804 [2024-07-22 18:10:18.851921] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:14.804 [2024-07-22 18:10:18.852021] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:14.804 [2024-07-22 18:10:18.852047] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:14.804 [2024-07-22 18:10:18.852054] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:14.804 [2024-07-22 18:10:18.852060] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:14.804 [2024-07-22 18:10:18.852077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:14.804 qpair failed and we were unable to recover it. 00:33:14.804 [2024-07-22 18:10:18.861960] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:14.804 [2024-07-22 18:10:18.862037] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:14.804 [2024-07-22 18:10:18.862062] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:14.804 [2024-07-22 18:10:18.862069] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:14.804 [2024-07-22 18:10:18.862074] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:14.804 [2024-07-22 18:10:18.862092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:14.804 qpair failed and we were unable to recover it. 00:33:14.804 [2024-07-22 18:10:18.871954] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:14.804 [2024-07-22 18:10:18.872040] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:14.805 [2024-07-22 18:10:18.872064] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:14.805 [2024-07-22 18:10:18.872071] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:14.805 [2024-07-22 18:10:18.872077] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:14.805 [2024-07-22 18:10:18.872095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:14.805 qpair failed and we were unable to recover it. 
00:33:14.805 [2024-07-22 18:10:18.882294] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:14.805 [2024-07-22 18:10:18.882469] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:14.805 [2024-07-22 18:10:18.882494] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:14.805 [2024-07-22 18:10:18.882501] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:14.805 [2024-07-22 18:10:18.882507] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:14.805 [2024-07-22 18:10:18.882524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:14.805 qpair failed and we were unable to recover it. 00:33:14.805 [2024-07-22 18:10:18.892039] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:14.805 [2024-07-22 18:10:18.892130] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:14.805 [2024-07-22 18:10:18.892159] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:14.805 [2024-07-22 18:10:18.892166] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:14.805 [2024-07-22 18:10:18.892172] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:14.805 [2024-07-22 18:10:18.892189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:14.805 qpair failed and we were unable to recover it. 00:33:14.805 [2024-07-22 18:10:18.901955] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:14.805 [2024-07-22 18:10:18.902041] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:14.805 [2024-07-22 18:10:18.902069] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:14.805 [2024-07-22 18:10:18.902076] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:14.805 [2024-07-22 18:10:18.902082] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:14.805 [2024-07-22 18:10:18.902101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:14.805 qpair failed and we were unable to recover it. 
00:33:14.805 [2024-07-22 18:10:18.912081] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:14.805 [2024-07-22 18:10:18.912163] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:14.805 [2024-07-22 18:10:18.912188] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:14.805 [2024-07-22 18:10:18.912195] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:14.805 [2024-07-22 18:10:18.912202] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:14.805 [2024-07-22 18:10:18.912220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:14.805 qpair failed and we were unable to recover it. 00:33:14.805 [2024-07-22 18:10:18.922402] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:14.805 [2024-07-22 18:10:18.922513] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:14.805 [2024-07-22 18:10:18.922540] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:14.805 [2024-07-22 18:10:18.922548] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:14.805 [2024-07-22 18:10:18.922554] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:14.805 [2024-07-22 18:10:18.922572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:14.805 qpair failed and we were unable to recover it. 00:33:14.805 [2024-07-22 18:10:18.932177] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:14.805 [2024-07-22 18:10:18.932266] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:14.805 [2024-07-22 18:10:18.932290] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:14.805 [2024-07-22 18:10:18.932297] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:14.805 [2024-07-22 18:10:18.932303] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:14.805 [2024-07-22 18:10:18.932329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:14.805 qpair failed and we were unable to recover it. 
00:33:14.805 [2024-07-22 18:10:18.942231] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:14.805 [2024-07-22 18:10:18.942306] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:14.805 [2024-07-22 18:10:18.942330] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:14.805 [2024-07-22 18:10:18.942338] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:14.805 [2024-07-22 18:10:18.942344] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:14.805 [2024-07-22 18:10:18.942368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:14.805 qpair failed and we were unable to recover it. 00:33:14.805 [2024-07-22 18:10:18.952194] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:14.805 [2024-07-22 18:10:18.952281] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:14.805 [2024-07-22 18:10:18.952309] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:14.805 [2024-07-22 18:10:18.952317] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:14.805 [2024-07-22 18:10:18.952323] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:14.805 [2024-07-22 18:10:18.952340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:14.805 qpair failed and we were unable to recover it. 00:33:14.805 [2024-07-22 18:10:18.962521] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:14.805 [2024-07-22 18:10:18.962635] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:14.805 [2024-07-22 18:10:18.962661] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:14.805 [2024-07-22 18:10:18.962668] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:14.805 [2024-07-22 18:10:18.962673] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:14.805 [2024-07-22 18:10:18.962690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:14.805 qpair failed and we were unable to recover it. 
00:33:14.805 [2024-07-22 18:10:18.972303] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:14.805 [2024-07-22 18:10:18.972417] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:14.805 [2024-07-22 18:10:18.972442] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:14.805 [2024-07-22 18:10:18.972449] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:14.805 [2024-07-22 18:10:18.972455] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:14.805 [2024-07-22 18:10:18.972473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:14.805 qpair failed and we were unable to recover it. 00:33:14.805 [2024-07-22 18:10:18.982337] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:14.805 [2024-07-22 18:10:18.982422] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:14.805 [2024-07-22 18:10:18.982456] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:14.805 [2024-07-22 18:10:18.982464] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:14.805 [2024-07-22 18:10:18.982470] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:14.805 [2024-07-22 18:10:18.982487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:14.805 qpair failed and we were unable to recover it. 00:33:14.805 [2024-07-22 18:10:18.992378] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:14.805 [2024-07-22 18:10:18.992459] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:14.805 [2024-07-22 18:10:18.992484] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:14.805 [2024-07-22 18:10:18.992491] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:14.805 [2024-07-22 18:10:18.992497] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:14.805 [2024-07-22 18:10:18.992513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:14.805 qpair failed and we were unable to recover it. 
00:33:14.805 [2024-07-22 18:10:19.002572] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:14.805 [2024-07-22 18:10:19.002687] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:14.806 [2024-07-22 18:10:19.002711] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:14.806 [2024-07-22 18:10:19.002718] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:14.806 [2024-07-22 18:10:19.002724] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:14.806 [2024-07-22 18:10:19.002741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:14.806 qpair failed and we were unable to recover it. 00:33:14.806 [2024-07-22 18:10:19.012493] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:14.806 [2024-07-22 18:10:19.012583] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:14.806 [2024-07-22 18:10:19.012607] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:14.806 [2024-07-22 18:10:19.012614] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:14.806 [2024-07-22 18:10:19.012620] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:14.806 [2024-07-22 18:10:19.012637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:14.806 qpair failed and we were unable to recover it. 00:33:14.806 [2024-07-22 18:10:19.022380] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:14.806 [2024-07-22 18:10:19.022464] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:14.806 [2024-07-22 18:10:19.022488] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:14.806 [2024-07-22 18:10:19.022495] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:14.806 [2024-07-22 18:10:19.022506] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:14.806 [2024-07-22 18:10:19.022523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:14.806 qpair failed and we were unable to recover it. 
00:33:14.806 [2024-07-22 18:10:19.032544] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:14.806 [2024-07-22 18:10:19.032629] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:14.806 [2024-07-22 18:10:19.032654] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:14.806 [2024-07-22 18:10:19.032661] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:14.806 [2024-07-22 18:10:19.032667] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:14.806 [2024-07-22 18:10:19.032683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:14.806 qpair failed and we were unable to recover it. 00:33:14.806 [2024-07-22 18:10:19.042834] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:14.806 [2024-07-22 18:10:19.042943] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:14.806 [2024-07-22 18:10:19.042966] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:14.806 [2024-07-22 18:10:19.042973] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:14.806 [2024-07-22 18:10:19.042979] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:14.806 [2024-07-22 18:10:19.042996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:14.806 qpair failed and we were unable to recover it. 00:33:14.806 [2024-07-22 18:10:19.052689] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:14.806 [2024-07-22 18:10:19.052783] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:14.806 [2024-07-22 18:10:19.052807] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:14.806 [2024-07-22 18:10:19.052814] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:14.806 [2024-07-22 18:10:19.052820] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:14.806 [2024-07-22 18:10:19.052836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:14.806 qpair failed and we were unable to recover it. 
00:33:14.806 [2024-07-22 18:10:19.062644] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:14.806 [2024-07-22 18:10:19.062726] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:14.806 [2024-07-22 18:10:19.062750] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:14.806 [2024-07-22 18:10:19.062757] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:14.806 [2024-07-22 18:10:19.062763] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:14.806 [2024-07-22 18:10:19.062780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:14.806 qpair failed and we were unable to recover it. 00:33:14.806 [2024-07-22 18:10:19.072747] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:14.806 [2024-07-22 18:10:19.072848] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:14.806 [2024-07-22 18:10:19.072872] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:14.806 [2024-07-22 18:10:19.072879] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:14.806 [2024-07-22 18:10:19.072885] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:14.806 [2024-07-22 18:10:19.072901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:14.806 qpair failed and we were unable to recover it. 00:33:15.068 [2024-07-22 18:10:19.082988] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.068 [2024-07-22 18:10:19.083103] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.068 [2024-07-22 18:10:19.083127] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.068 [2024-07-22 18:10:19.083134] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.068 [2024-07-22 18:10:19.083140] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.068 [2024-07-22 18:10:19.083156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.068 qpair failed and we were unable to recover it. 
00:33:15.068 [2024-07-22 18:10:19.092802] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.068 [2024-07-22 18:10:19.092942] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.068 [2024-07-22 18:10:19.092967] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.068 [2024-07-22 18:10:19.092974] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.068 [2024-07-22 18:10:19.092980] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.068 [2024-07-22 18:10:19.092997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.068 qpair failed and we were unable to recover it. 00:33:15.068 [2024-07-22 18:10:19.102816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.068 [2024-07-22 18:10:19.102897] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.068 [2024-07-22 18:10:19.102921] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.068 [2024-07-22 18:10:19.102928] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.068 [2024-07-22 18:10:19.102935] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.068 [2024-07-22 18:10:19.102953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.068 qpair failed and we were unable to recover it. 00:33:15.068 [2024-07-22 18:10:19.113090] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.068 [2024-07-22 18:10:19.113196] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.068 [2024-07-22 18:10:19.113220] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.068 [2024-07-22 18:10:19.113227] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.068 [2024-07-22 18:10:19.113239] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.068 [2024-07-22 18:10:19.113256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.068 qpair failed and we were unable to recover it. 
00:33:15.068 [2024-07-22 18:10:19.123255] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.068 [2024-07-22 18:10:19.123376] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.068 [2024-07-22 18:10:19.123402] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.069 [2024-07-22 18:10:19.123410] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.069 [2024-07-22 18:10:19.123416] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.069 [2024-07-22 18:10:19.123433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.069 qpair failed and we were unable to recover it. 00:33:15.069 [2024-07-22 18:10:19.132990] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.069 [2024-07-22 18:10:19.133087] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.069 [2024-07-22 18:10:19.133111] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.069 [2024-07-22 18:10:19.133118] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.069 [2024-07-22 18:10:19.133125] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.069 [2024-07-22 18:10:19.133142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.069 qpair failed and we were unable to recover it. 00:33:15.069 [2024-07-22 18:10:19.142999] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.069 [2024-07-22 18:10:19.143081] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.069 [2024-07-22 18:10:19.143104] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.069 [2024-07-22 18:10:19.143111] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.069 [2024-07-22 18:10:19.143118] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.069 [2024-07-22 18:10:19.143135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.069 qpair failed and we were unable to recover it. 
00:33:15.069 [2024-07-22 18:10:19.152946] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.069 [2024-07-22 18:10:19.153027] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.069 [2024-07-22 18:10:19.153052] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.069 [2024-07-22 18:10:19.153059] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.069 [2024-07-22 18:10:19.153065] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.069 [2024-07-22 18:10:19.153081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.069 qpair failed and we were unable to recover it. 00:33:15.069 [2024-07-22 18:10:19.163274] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.069 [2024-07-22 18:10:19.163410] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.069 [2024-07-22 18:10:19.163435] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.069 [2024-07-22 18:10:19.163442] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.069 [2024-07-22 18:10:19.163448] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.069 [2024-07-22 18:10:19.163465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.069 qpair failed and we were unable to recover it. 00:33:15.069 [2024-07-22 18:10:19.172931] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.069 [2024-07-22 18:10:19.173026] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.069 [2024-07-22 18:10:19.173050] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.069 [2024-07-22 18:10:19.173057] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.069 [2024-07-22 18:10:19.173063] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.069 [2024-07-22 18:10:19.173080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.069 qpair failed and we were unable to recover it. 
00:33:15.069 [2024-07-22 18:10:19.182938] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.069 [2024-07-22 18:10:19.183020] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.069 [2024-07-22 18:10:19.183045] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.069 [2024-07-22 18:10:19.183052] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.069 [2024-07-22 18:10:19.183058] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.069 [2024-07-22 18:10:19.183074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.069 qpair failed and we were unable to recover it. 00:33:15.069 [2024-07-22 18:10:19.193120] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.069 [2024-07-22 18:10:19.193208] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.069 [2024-07-22 18:10:19.193233] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.069 [2024-07-22 18:10:19.193241] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.069 [2024-07-22 18:10:19.193248] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.069 [2024-07-22 18:10:19.193265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.069 qpair failed and we were unable to recover it. 00:33:15.069 [2024-07-22 18:10:19.203449] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.069 [2024-07-22 18:10:19.203561] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.069 [2024-07-22 18:10:19.203586] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.069 [2024-07-22 18:10:19.203592] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.069 [2024-07-22 18:10:19.203605] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.069 [2024-07-22 18:10:19.203622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.069 qpair failed and we were unable to recover it. 
00:33:15.069 [2024-07-22 18:10:19.213090] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.069 [2024-07-22 18:10:19.213197] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.069 [2024-07-22 18:10:19.213221] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.069 [2024-07-22 18:10:19.213228] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.069 [2024-07-22 18:10:19.213235] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.069 [2024-07-22 18:10:19.213251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.069 qpair failed and we were unable to recover it. 00:33:15.069 [2024-07-22 18:10:19.223114] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.069 [2024-07-22 18:10:19.223199] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.069 [2024-07-22 18:10:19.223227] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.069 [2024-07-22 18:10:19.223235] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.069 [2024-07-22 18:10:19.223241] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.069 [2024-07-22 18:10:19.223258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.069 qpair failed and we were unable to recover it. 00:33:15.069 [2024-07-22 18:10:19.233238] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.069 [2024-07-22 18:10:19.233324] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.069 [2024-07-22 18:10:19.233358] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.069 [2024-07-22 18:10:19.233366] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.069 [2024-07-22 18:10:19.233372] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.069 [2024-07-22 18:10:19.233389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.069 qpair failed and we were unable to recover it. 
00:33:15.069 [2024-07-22 18:10:19.243462] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.069 [2024-07-22 18:10:19.243581] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.069 [2024-07-22 18:10:19.243606] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.069 [2024-07-22 18:10:19.243613] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.069 [2024-07-22 18:10:19.243619] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.069 [2024-07-22 18:10:19.243635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.069 qpair failed and we were unable to recover it. 00:33:15.069 [2024-07-22 18:10:19.253368] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.069 [2024-07-22 18:10:19.253480] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.069 [2024-07-22 18:10:19.253505] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.069 [2024-07-22 18:10:19.253512] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.069 [2024-07-22 18:10:19.253519] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.070 [2024-07-22 18:10:19.253535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.070 qpair failed and we were unable to recover it. 00:33:15.070 [2024-07-22 18:10:19.263347] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.070 [2024-07-22 18:10:19.263432] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.070 [2024-07-22 18:10:19.263459] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.070 [2024-07-22 18:10:19.263467] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.070 [2024-07-22 18:10:19.263473] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.070 [2024-07-22 18:10:19.263490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.070 qpair failed and we were unable to recover it. 
00:33:15.070 [2024-07-22 18:10:19.273404] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.070 [2024-07-22 18:10:19.273483] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.070 [2024-07-22 18:10:19.273508] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.070 [2024-07-22 18:10:19.273515] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.070 [2024-07-22 18:10:19.273521] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.070 [2024-07-22 18:10:19.273538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.070 qpair failed and we were unable to recover it. 00:33:15.070 [2024-07-22 18:10:19.283695] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.070 [2024-07-22 18:10:19.283819] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.070 [2024-07-22 18:10:19.283843] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.070 [2024-07-22 18:10:19.283850] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.070 [2024-07-22 18:10:19.283856] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.070 [2024-07-22 18:10:19.283872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.070 qpair failed and we were unable to recover it. 00:33:15.070 [2024-07-22 18:10:19.293474] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.070 [2024-07-22 18:10:19.293568] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.070 [2024-07-22 18:10:19.293591] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.070 [2024-07-22 18:10:19.293598] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.070 [2024-07-22 18:10:19.293610] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.070 [2024-07-22 18:10:19.293627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.070 qpair failed and we were unable to recover it. 
00:33:15.070 [2024-07-22 18:10:19.303470] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.070 [2024-07-22 18:10:19.303551] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.070 [2024-07-22 18:10:19.303575] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.070 [2024-07-22 18:10:19.303582] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.070 [2024-07-22 18:10:19.303588] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.070 [2024-07-22 18:10:19.303605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.070 qpair failed and we were unable to recover it. 00:33:15.070 [2024-07-22 18:10:19.313500] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.070 [2024-07-22 18:10:19.313585] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.070 [2024-07-22 18:10:19.313609] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.070 [2024-07-22 18:10:19.313616] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.070 [2024-07-22 18:10:19.313623] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.070 [2024-07-22 18:10:19.313639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.070 qpair failed and we were unable to recover it. 00:33:15.070 [2024-07-22 18:10:19.323828] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.070 [2024-07-22 18:10:19.323963] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.070 [2024-07-22 18:10:19.323986] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.070 [2024-07-22 18:10:19.323993] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.070 [2024-07-22 18:10:19.323999] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.070 [2024-07-22 18:10:19.324015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.070 qpair failed and we were unable to recover it. 
00:33:15.070 [2024-07-22 18:10:19.333611] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.070 [2024-07-22 18:10:19.333750] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.070 [2024-07-22 18:10:19.333774] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.070 [2024-07-22 18:10:19.333782] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.070 [2024-07-22 18:10:19.333788] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.070 [2024-07-22 18:10:19.333804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.070 qpair failed and we were unable to recover it. 00:33:15.332 [2024-07-22 18:10:19.343481] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.332 [2024-07-22 18:10:19.343555] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.332 [2024-07-22 18:10:19.343580] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.332 [2024-07-22 18:10:19.343587] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.332 [2024-07-22 18:10:19.343593] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.332 [2024-07-22 18:10:19.343610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.332 qpair failed and we were unable to recover it. 00:33:15.332 [2024-07-22 18:10:19.353638] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.332 [2024-07-22 18:10:19.353760] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.332 [2024-07-22 18:10:19.353783] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.332 [2024-07-22 18:10:19.353790] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.332 [2024-07-22 18:10:19.353796] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.332 [2024-07-22 18:10:19.353813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.332 qpair failed and we were unable to recover it. 
00:33:15.332 [2024-07-22 18:10:19.363955] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.332 [2024-07-22 18:10:19.364061] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.332 [2024-07-22 18:10:19.364086] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.332 [2024-07-22 18:10:19.364093] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.332 [2024-07-22 18:10:19.364099] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.332 [2024-07-22 18:10:19.364115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.332 qpair failed and we were unable to recover it. 00:33:15.332 [2024-07-22 18:10:19.373648] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.333 [2024-07-22 18:10:19.373739] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.333 [2024-07-22 18:10:19.373763] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.333 [2024-07-22 18:10:19.373770] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.333 [2024-07-22 18:10:19.373776] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.333 [2024-07-22 18:10:19.373793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.333 qpair failed and we were unable to recover it. 00:33:15.333 [2024-07-22 18:10:19.383749] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.333 [2024-07-22 18:10:19.383826] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.333 [2024-07-22 18:10:19.383850] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.333 [2024-07-22 18:10:19.383863] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.333 [2024-07-22 18:10:19.383870] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.333 [2024-07-22 18:10:19.383886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.333 qpair failed and we were unable to recover it. 
00:33:15.333 [2024-07-22 18:10:19.393801] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.333 [2024-07-22 18:10:19.393892] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.333 [2024-07-22 18:10:19.393916] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.333 [2024-07-22 18:10:19.393924] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.333 [2024-07-22 18:10:19.393930] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.333 [2024-07-22 18:10:19.393946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.333 qpair failed and we were unable to recover it. 00:33:15.333 [2024-07-22 18:10:19.404120] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.333 [2024-07-22 18:10:19.404244] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.333 [2024-07-22 18:10:19.404269] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.333 [2024-07-22 18:10:19.404276] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.333 [2024-07-22 18:10:19.404282] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.333 [2024-07-22 18:10:19.404298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.333 qpair failed and we were unable to recover it. 00:33:15.333 [2024-07-22 18:10:19.413947] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.333 [2024-07-22 18:10:19.414073] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.333 [2024-07-22 18:10:19.414109] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.333 [2024-07-22 18:10:19.414117] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.333 [2024-07-22 18:10:19.414124] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.333 [2024-07-22 18:10:19.414145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.333 qpair failed and we were unable to recover it. 
00:33:15.333 [2024-07-22 18:10:19.423881] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.333 [2024-07-22 18:10:19.423959] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.333 [2024-07-22 18:10:19.423986] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.333 [2024-07-22 18:10:19.423993] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.333 [2024-07-22 18:10:19.424000] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.333 [2024-07-22 18:10:19.424018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.333 qpair failed and we were unable to recover it. 00:33:15.333 [2024-07-22 18:10:19.433922] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.333 [2024-07-22 18:10:19.434031] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.333 [2024-07-22 18:10:19.434056] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.333 [2024-07-22 18:10:19.434064] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.333 [2024-07-22 18:10:19.434070] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.333 [2024-07-22 18:10:19.434087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.333 qpair failed and we were unable to recover it. 00:33:15.333 [2024-07-22 18:10:19.444259] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.333 [2024-07-22 18:10:19.444389] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.333 [2024-07-22 18:10:19.444414] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.333 [2024-07-22 18:10:19.444421] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.333 [2024-07-22 18:10:19.444427] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.333 [2024-07-22 18:10:19.444445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.333 qpair failed and we were unable to recover it. 
00:33:15.333 [2024-07-22 18:10:19.453888] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.333 [2024-07-22 18:10:19.453984] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.333 [2024-07-22 18:10:19.454010] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.333 [2024-07-22 18:10:19.454018] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.333 [2024-07-22 18:10:19.454023] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.333 [2024-07-22 18:10:19.454041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.333 qpair failed and we were unable to recover it. 00:33:15.333 [2024-07-22 18:10:19.464040] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.333 [2024-07-22 18:10:19.464137] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.333 [2024-07-22 18:10:19.464162] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.333 [2024-07-22 18:10:19.464169] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.333 [2024-07-22 18:10:19.464175] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.333 [2024-07-22 18:10:19.464192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.333 qpair failed and we were unable to recover it. 00:33:15.333 [2024-07-22 18:10:19.474036] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.333 [2024-07-22 18:10:19.474128] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.333 [2024-07-22 18:10:19.474164] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.333 [2024-07-22 18:10:19.474180] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.333 [2024-07-22 18:10:19.474186] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.333 [2024-07-22 18:10:19.474210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.333 qpair failed and we were unable to recover it. 
00:33:15.333 [2024-07-22 18:10:19.484363] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.333 [2024-07-22 18:10:19.484486] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.333 [2024-07-22 18:10:19.484514] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.333 [2024-07-22 18:10:19.484522] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.333 [2024-07-22 18:10:19.484528] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.333 [2024-07-22 18:10:19.484547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.333 qpair failed and we were unable to recover it. 00:33:15.333 [2024-07-22 18:10:19.494135] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.333 [2024-07-22 18:10:19.494239] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.333 [2024-07-22 18:10:19.494264] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.333 [2024-07-22 18:10:19.494271] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.333 [2024-07-22 18:10:19.494277] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.333 [2024-07-22 18:10:19.494295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.333 qpair failed and we were unable to recover it. 00:33:15.333 [2024-07-22 18:10:19.504169] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.333 [2024-07-22 18:10:19.504252] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.333 [2024-07-22 18:10:19.504277] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.334 [2024-07-22 18:10:19.504284] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.334 [2024-07-22 18:10:19.504290] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.334 [2024-07-22 18:10:19.504309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.334 qpair failed and we were unable to recover it. 
00:33:15.334 [2024-07-22 18:10:19.514201] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.334 [2024-07-22 18:10:19.514285] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.334 [2024-07-22 18:10:19.514310] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.334 [2024-07-22 18:10:19.514317] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.334 [2024-07-22 18:10:19.514323] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.334 [2024-07-22 18:10:19.514340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.334 qpair failed and we were unable to recover it. 00:33:15.334 [2024-07-22 18:10:19.524506] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.334 [2024-07-22 18:10:19.524636] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.334 [2024-07-22 18:10:19.524661] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.334 [2024-07-22 18:10:19.524669] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.334 [2024-07-22 18:10:19.524674] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.334 [2024-07-22 18:10:19.524692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.334 qpair failed and we were unable to recover it. 00:33:15.334 [2024-07-22 18:10:19.534291] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.334 [2024-07-22 18:10:19.534383] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.334 [2024-07-22 18:10:19.534408] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.334 [2024-07-22 18:10:19.534416] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.334 [2024-07-22 18:10:19.534422] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.334 [2024-07-22 18:10:19.534439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.334 qpair failed and we were unable to recover it. 
00:33:15.334 [2024-07-22 18:10:19.544281] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.334 [2024-07-22 18:10:19.544368] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.334 [2024-07-22 18:10:19.544393] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.334 [2024-07-22 18:10:19.544401] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.334 [2024-07-22 18:10:19.544407] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.334 [2024-07-22 18:10:19.544424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.334 qpair failed and we were unable to recover it. 00:33:15.334 [2024-07-22 18:10:19.554327] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.334 [2024-07-22 18:10:19.554413] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.334 [2024-07-22 18:10:19.554438] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.334 [2024-07-22 18:10:19.554446] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.334 [2024-07-22 18:10:19.554452] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.334 [2024-07-22 18:10:19.554468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.334 qpair failed and we were unable to recover it. 00:33:15.334 [2024-07-22 18:10:19.564648] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.334 [2024-07-22 18:10:19.564761] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.334 [2024-07-22 18:10:19.564785] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.334 [2024-07-22 18:10:19.564799] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.334 [2024-07-22 18:10:19.564805] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.334 [2024-07-22 18:10:19.564822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.334 qpair failed and we were unable to recover it. 
00:33:15.334 [2024-07-22 18:10:19.574431] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.334 [2024-07-22 18:10:19.574529] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.334 [2024-07-22 18:10:19.574559] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.334 [2024-07-22 18:10:19.574567] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.334 [2024-07-22 18:10:19.574573] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.334 [2024-07-22 18:10:19.574592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.334 qpair failed and we were unable to recover it. 00:33:15.334 [2024-07-22 18:10:19.584357] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.334 [2024-07-22 18:10:19.584432] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.334 [2024-07-22 18:10:19.584457] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.334 [2024-07-22 18:10:19.584464] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.334 [2024-07-22 18:10:19.584470] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.334 [2024-07-22 18:10:19.584488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.334 qpair failed and we were unable to recover it. 00:33:15.334 [2024-07-22 18:10:19.594424] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.334 [2024-07-22 18:10:19.594501] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.334 [2024-07-22 18:10:19.594526] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.334 [2024-07-22 18:10:19.594533] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.334 [2024-07-22 18:10:19.594539] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.334 [2024-07-22 18:10:19.594556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.334 qpair failed and we were unable to recover it. 
00:33:15.334 [2024-07-22 18:10:19.604786] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.334 [2024-07-22 18:10:19.604902] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.334 [2024-07-22 18:10:19.604926] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.334 [2024-07-22 18:10:19.604933] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.334 [2024-07-22 18:10:19.604938] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.334 [2024-07-22 18:10:19.604957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.334 qpair failed and we were unable to recover it. 00:33:15.596 [2024-07-22 18:10:19.614539] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.596 [2024-07-22 18:10:19.614636] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.596 [2024-07-22 18:10:19.614660] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.596 [2024-07-22 18:10:19.614667] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.596 [2024-07-22 18:10:19.614672] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.596 [2024-07-22 18:10:19.614689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.596 qpair failed and we were unable to recover it. 00:33:15.596 [2024-07-22 18:10:19.624525] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.596 [2024-07-22 18:10:19.624597] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.596 [2024-07-22 18:10:19.624621] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.596 [2024-07-22 18:10:19.624629] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.596 [2024-07-22 18:10:19.624634] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.596 [2024-07-22 18:10:19.624651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.596 qpair failed and we were unable to recover it. 
00:33:15.596 [2024-07-22 18:10:19.634603] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.596 [2024-07-22 18:10:19.634729] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.597 [2024-07-22 18:10:19.634753] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.597 [2024-07-22 18:10:19.634761] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.597 [2024-07-22 18:10:19.634767] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.597 [2024-07-22 18:10:19.634783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.597 qpair failed and we were unable to recover it. 00:33:15.597 [2024-07-22 18:10:19.644932] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.597 [2024-07-22 18:10:19.645039] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.597 [2024-07-22 18:10:19.645063] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.597 [2024-07-22 18:10:19.645072] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.597 [2024-07-22 18:10:19.645078] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.597 [2024-07-22 18:10:19.645095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.597 qpair failed and we were unable to recover it. 00:33:15.597 [2024-07-22 18:10:19.654699] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.597 [2024-07-22 18:10:19.654791] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.597 [2024-07-22 18:10:19.654816] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.597 [2024-07-22 18:10:19.654829] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.597 [2024-07-22 18:10:19.654835] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.597 [2024-07-22 18:10:19.654852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.597 qpair failed and we were unable to recover it. 
00:33:15.597 [2024-07-22 18:10:19.664721] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.597 [2024-07-22 18:10:19.664797] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.597 [2024-07-22 18:10:19.664822] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.597 [2024-07-22 18:10:19.664829] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.597 [2024-07-22 18:10:19.664835] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.597 [2024-07-22 18:10:19.664851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.597 qpair failed and we were unable to recover it. 00:33:15.597 [2024-07-22 18:10:19.674738] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.597 [2024-07-22 18:10:19.674812] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.597 [2024-07-22 18:10:19.674837] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.597 [2024-07-22 18:10:19.674844] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.597 [2024-07-22 18:10:19.674850] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.597 [2024-07-22 18:10:19.674867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.597 qpair failed and we were unable to recover it. 00:33:15.597 [2024-07-22 18:10:19.685073] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.597 [2024-07-22 18:10:19.685186] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.597 [2024-07-22 18:10:19.685211] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.597 [2024-07-22 18:10:19.685218] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.597 [2024-07-22 18:10:19.685226] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.597 [2024-07-22 18:10:19.685243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.597 qpair failed and we were unable to recover it. 
00:33:15.597 [2024-07-22 18:10:19.694805] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.597 [2024-07-22 18:10:19.694912] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.597 [2024-07-22 18:10:19.694937] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.597 [2024-07-22 18:10:19.694945] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.597 [2024-07-22 18:10:19.694951] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.597 [2024-07-22 18:10:19.694968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.597 qpair failed and we were unable to recover it. 00:33:15.597 [2024-07-22 18:10:19.704705] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.597 [2024-07-22 18:10:19.704787] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.597 [2024-07-22 18:10:19.704812] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.597 [2024-07-22 18:10:19.704819] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.597 [2024-07-22 18:10:19.704825] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.597 [2024-07-22 18:10:19.704842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.597 qpair failed and we were unable to recover it. 00:33:15.597 [2024-07-22 18:10:19.714864] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.597 [2024-07-22 18:10:19.714940] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.597 [2024-07-22 18:10:19.714965] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.597 [2024-07-22 18:10:19.714972] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.597 [2024-07-22 18:10:19.714978] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.597 [2024-07-22 18:10:19.714996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.597 qpair failed and we were unable to recover it. 
00:33:15.597 [2024-07-22 18:10:19.725154] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.597 [2024-07-22 18:10:19.725265] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.597 [2024-07-22 18:10:19.725289] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.597 [2024-07-22 18:10:19.725297] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.597 [2024-07-22 18:10:19.725303] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.597 [2024-07-22 18:10:19.725320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.597 qpair failed and we were unable to recover it. 00:33:15.597 [2024-07-22 18:10:19.734987] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.597 [2024-07-22 18:10:19.735119] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.597 [2024-07-22 18:10:19.735144] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.597 [2024-07-22 18:10:19.735152] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.597 [2024-07-22 18:10:19.735158] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.597 [2024-07-22 18:10:19.735175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.597 qpair failed and we were unable to recover it. 00:33:15.597 [2024-07-22 18:10:19.744918] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.597 [2024-07-22 18:10:19.744995] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.597 [2024-07-22 18:10:19.745025] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.597 [2024-07-22 18:10:19.745033] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.597 [2024-07-22 18:10:19.745039] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.597 [2024-07-22 18:10:19.745055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.597 qpair failed and we were unable to recover it. 
00:33:15.597 [2024-07-22 18:10:19.754949] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.597 [2024-07-22 18:10:19.755029] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.597 [2024-07-22 18:10:19.755055] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.597 [2024-07-22 18:10:19.755062] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.597 [2024-07-22 18:10:19.755068] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.598 [2024-07-22 18:10:19.755085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.598 qpair failed and we were unable to recover it. 00:33:15.598 [2024-07-22 18:10:19.765177] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.598 [2024-07-22 18:10:19.765293] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.598 [2024-07-22 18:10:19.765317] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.598 [2024-07-22 18:10:19.765325] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.598 [2024-07-22 18:10:19.765331] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.598 [2024-07-22 18:10:19.765360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.598 qpair failed and we were unable to recover it. 00:33:15.598 [2024-07-22 18:10:19.775089] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.598 [2024-07-22 18:10:19.775187] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.598 [2024-07-22 18:10:19.775210] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.598 [2024-07-22 18:10:19.775218] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.598 [2024-07-22 18:10:19.775224] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.598 [2024-07-22 18:10:19.775241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.598 qpair failed and we were unable to recover it. 
00:33:15.598 [2024-07-22 18:10:19.785121] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.598 [2024-07-22 18:10:19.785197] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.598 [2024-07-22 18:10:19.785221] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.598 [2024-07-22 18:10:19.785229] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.598 [2024-07-22 18:10:19.785235] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.598 [2024-07-22 18:10:19.785251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.598 qpair failed and we were unable to recover it. 00:33:15.598 [2024-07-22 18:10:19.795025] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.598 [2024-07-22 18:10:19.795118] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.598 [2024-07-22 18:10:19.795144] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.598 [2024-07-22 18:10:19.795152] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.598 [2024-07-22 18:10:19.795158] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.598 [2024-07-22 18:10:19.795174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.598 qpair failed and we were unable to recover it. 00:33:15.598 [2024-07-22 18:10:19.805488] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.598 [2024-07-22 18:10:19.805603] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.598 [2024-07-22 18:10:19.805627] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.598 [2024-07-22 18:10:19.805635] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.598 [2024-07-22 18:10:19.805641] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.598 [2024-07-22 18:10:19.805659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.598 qpair failed and we were unable to recover it. 
00:33:15.598 [2024-07-22 18:10:19.815236] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.598 [2024-07-22 18:10:19.815341] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.598 [2024-07-22 18:10:19.815373] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.598 [2024-07-22 18:10:19.815380] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.598 [2024-07-22 18:10:19.815387] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.598 [2024-07-22 18:10:19.815403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.598 qpair failed and we were unable to recover it. 00:33:15.598 [2024-07-22 18:10:19.825257] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.598 [2024-07-22 18:10:19.825331] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.598 [2024-07-22 18:10:19.825366] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.598 [2024-07-22 18:10:19.825373] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.598 [2024-07-22 18:10:19.825379] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.598 [2024-07-22 18:10:19.825396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.598 qpair failed and we were unable to recover it. 00:33:15.598 [2024-07-22 18:10:19.835323] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.598 [2024-07-22 18:10:19.835413] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.598 [2024-07-22 18:10:19.835444] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.598 [2024-07-22 18:10:19.835452] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.598 [2024-07-22 18:10:19.835458] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.598 [2024-07-22 18:10:19.835476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.598 qpair failed and we were unable to recover it. 
00:33:15.598 [2024-07-22 18:10:19.845580] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.598 [2024-07-22 18:10:19.845695] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.598 [2024-07-22 18:10:19.845721] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.598 [2024-07-22 18:10:19.845728] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.598 [2024-07-22 18:10:19.845734] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.598 [2024-07-22 18:10:19.845750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.598 qpair failed and we were unable to recover it. 00:33:15.598 [2024-07-22 18:10:19.855368] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.598 [2024-07-22 18:10:19.855465] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.598 [2024-07-22 18:10:19.855489] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.598 [2024-07-22 18:10:19.855497] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.598 [2024-07-22 18:10:19.855502] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.598 [2024-07-22 18:10:19.855519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.598 qpair failed and we were unable to recover it. 00:33:15.598 [2024-07-22 18:10:19.865395] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.598 [2024-07-22 18:10:19.865478] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.598 [2024-07-22 18:10:19.865503] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.598 [2024-07-22 18:10:19.865510] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.598 [2024-07-22 18:10:19.865516] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.598 [2024-07-22 18:10:19.865533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.598 qpair failed and we were unable to recover it. 
00:33:15.861 [2024-07-22 18:10:19.875417] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.861 [2024-07-22 18:10:19.875503] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.861 [2024-07-22 18:10:19.875527] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.861 [2024-07-22 18:10:19.875535] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.861 [2024-07-22 18:10:19.875540] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.861 [2024-07-22 18:10:19.875565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.861 qpair failed and we were unable to recover it. 00:33:15.861 [2024-07-22 18:10:19.885717] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.861 [2024-07-22 18:10:19.885828] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.861 [2024-07-22 18:10:19.885854] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.861 [2024-07-22 18:10:19.885861] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.861 [2024-07-22 18:10:19.885867] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.861 [2024-07-22 18:10:19.885884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.861 qpair failed and we were unable to recover it. 00:33:15.861 [2024-07-22 18:10:19.895452] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.861 [2024-07-22 18:10:19.895590] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.861 [2024-07-22 18:10:19.895614] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.861 [2024-07-22 18:10:19.895622] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.861 [2024-07-22 18:10:19.895628] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.862 [2024-07-22 18:10:19.895645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.862 qpair failed and we were unable to recover it. 
00:33:15.862 [2024-07-22 18:10:19.905508] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.862 [2024-07-22 18:10:19.905612] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.862 [2024-07-22 18:10:19.905637] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.862 [2024-07-22 18:10:19.905645] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.862 [2024-07-22 18:10:19.905651] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.862 [2024-07-22 18:10:19.905668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.862 qpair failed and we were unable to recover it. 00:33:15.862 [2024-07-22 18:10:19.915583] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.862 [2024-07-22 18:10:19.915656] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.862 [2024-07-22 18:10:19.915681] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.862 [2024-07-22 18:10:19.915688] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.862 [2024-07-22 18:10:19.915694] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.862 [2024-07-22 18:10:19.915710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.862 qpair failed and we were unable to recover it. 00:33:15.862 [2024-07-22 18:10:19.925821] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.862 [2024-07-22 18:10:19.925935] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.862 [2024-07-22 18:10:19.925966] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.862 [2024-07-22 18:10:19.925974] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.862 [2024-07-22 18:10:19.925980] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.862 [2024-07-22 18:10:19.925997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.862 qpair failed and we were unable to recover it. 
00:33:15.862 [2024-07-22 18:10:19.935577] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.862 [2024-07-22 18:10:19.935668] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.862 [2024-07-22 18:10:19.935694] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.862 [2024-07-22 18:10:19.935702] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.862 [2024-07-22 18:10:19.935708] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.862 [2024-07-22 18:10:19.935726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.862 qpair failed and we were unable to recover it. 00:33:15.862 [2024-07-22 18:10:19.945495] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.862 [2024-07-22 18:10:19.945589] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.862 [2024-07-22 18:10:19.945616] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.862 [2024-07-22 18:10:19.945623] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.862 [2024-07-22 18:10:19.945629] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.862 [2024-07-22 18:10:19.945647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.862 qpair failed and we were unable to recover it. 00:33:15.862 [2024-07-22 18:10:19.955647] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.862 [2024-07-22 18:10:19.955757] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.862 [2024-07-22 18:10:19.955784] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.862 [2024-07-22 18:10:19.955792] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.862 [2024-07-22 18:10:19.955797] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.862 [2024-07-22 18:10:19.955814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.862 qpair failed and we were unable to recover it. 
00:33:15.862 [2024-07-22 18:10:19.966006] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.862 [2024-07-22 18:10:19.966142] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.862 [2024-07-22 18:10:19.966167] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.862 [2024-07-22 18:10:19.966174] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.862 [2024-07-22 18:10:19.966180] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.862 [2024-07-22 18:10:19.966203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.862 qpair failed and we were unable to recover it. 00:33:15.862 [2024-07-22 18:10:19.975755] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.862 [2024-07-22 18:10:19.975849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.862 [2024-07-22 18:10:19.975874] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.862 [2024-07-22 18:10:19.975882] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.862 [2024-07-22 18:10:19.975887] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.862 [2024-07-22 18:10:19.975904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.862 qpair failed and we were unable to recover it. 00:33:15.862 [2024-07-22 18:10:19.985706] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.862 [2024-07-22 18:10:19.985783] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.862 [2024-07-22 18:10:19.985809] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.862 [2024-07-22 18:10:19.985816] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.862 [2024-07-22 18:10:19.985822] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.862 [2024-07-22 18:10:19.985838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.862 qpair failed and we were unable to recover it. 
00:33:15.862 [2024-07-22 18:10:19.995756] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.862 [2024-07-22 18:10:19.995833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.862 [2024-07-22 18:10:19.995858] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.862 [2024-07-22 18:10:19.995866] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.862 [2024-07-22 18:10:19.995872] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.862 [2024-07-22 18:10:19.995888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.862 qpair failed and we were unable to recover it. 00:33:15.862 [2024-07-22 18:10:20.006094] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.862 [2024-07-22 18:10:20.006236] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.862 [2024-07-22 18:10:20.006262] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.862 [2024-07-22 18:10:20.006270] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.862 [2024-07-22 18:10:20.006276] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.862 [2024-07-22 18:10:20.006293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.862 qpair failed and we were unable to recover it. 00:33:15.862 [2024-07-22 18:10:20.015851] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.862 [2024-07-22 18:10:20.015948] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.862 [2024-07-22 18:10:20.015979] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.862 [2024-07-22 18:10:20.015986] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.862 [2024-07-22 18:10:20.015992] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.862 [2024-07-22 18:10:20.016009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.862 qpair failed and we were unable to recover it. 
00:33:15.862 [2024-07-22 18:10:20.025792] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.862 [2024-07-22 18:10:20.025876] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.862 [2024-07-22 18:10:20.025902] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.862 [2024-07-22 18:10:20.025910] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.862 [2024-07-22 18:10:20.025916] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.862 [2024-07-22 18:10:20.025934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.862 qpair failed and we were unable to recover it. 00:33:15.862 [2024-07-22 18:10:20.035932] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.863 [2024-07-22 18:10:20.036009] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.863 [2024-07-22 18:10:20.036034] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.863 [2024-07-22 18:10:20.036043] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.863 [2024-07-22 18:10:20.036050] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.863 [2024-07-22 18:10:20.036067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.863 qpair failed and we were unable to recover it. 00:33:15.863 [2024-07-22 18:10:20.046251] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.863 [2024-07-22 18:10:20.046382] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.863 [2024-07-22 18:10:20.046422] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.863 [2024-07-22 18:10:20.046433] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.863 [2024-07-22 18:10:20.046441] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.863 [2024-07-22 18:10:20.046464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.863 qpair failed and we were unable to recover it. 
00:33:15.863 [2024-07-22 18:10:20.055988] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.863 [2024-07-22 18:10:20.056085] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.863 [2024-07-22 18:10:20.056114] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.863 [2024-07-22 18:10:20.056122] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.863 [2024-07-22 18:10:20.056128] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.863 [2024-07-22 18:10:20.056153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.863 qpair failed and we were unable to recover it. 00:33:15.863 [2024-07-22 18:10:20.066006] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.863 [2024-07-22 18:10:20.066092] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.863 [2024-07-22 18:10:20.066119] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.863 [2024-07-22 18:10:20.066127] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.863 [2024-07-22 18:10:20.066134] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.863 [2024-07-22 18:10:20.066151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.863 qpair failed and we were unable to recover it. 00:33:15.863 [2024-07-22 18:10:20.076085] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.863 [2024-07-22 18:10:20.076162] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.863 [2024-07-22 18:10:20.076188] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.863 [2024-07-22 18:10:20.076195] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.863 [2024-07-22 18:10:20.076202] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.863 [2024-07-22 18:10:20.076220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.863 qpair failed and we were unable to recover it. 
00:33:15.863 [2024-07-22 18:10:20.086375] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.863 [2024-07-22 18:10:20.086491] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.863 [2024-07-22 18:10:20.086518] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.863 [2024-07-22 18:10:20.086525] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.863 [2024-07-22 18:10:20.086532] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.863 [2024-07-22 18:10:20.086549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.863 qpair failed and we were unable to recover it. 00:33:15.863 [2024-07-22 18:10:20.096032] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.863 [2024-07-22 18:10:20.096127] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.863 [2024-07-22 18:10:20.096152] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.863 [2024-07-22 18:10:20.096159] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.863 [2024-07-22 18:10:20.096166] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.863 [2024-07-22 18:10:20.096184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.863 qpair failed and we were unable to recover it. 00:33:15.863 [2024-07-22 18:10:20.106172] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.863 [2024-07-22 18:10:20.106267] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.863 [2024-07-22 18:10:20.106299] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.863 [2024-07-22 18:10:20.106307] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.863 [2024-07-22 18:10:20.106313] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.863 [2024-07-22 18:10:20.106330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.863 qpair failed and we were unable to recover it. 
00:33:15.863 [2024-07-22 18:10:20.116186] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.863 [2024-07-22 18:10:20.116271] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.863 [2024-07-22 18:10:20.116297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.863 [2024-07-22 18:10:20.116305] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.863 [2024-07-22 18:10:20.116311] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.863 [2024-07-22 18:10:20.116328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.863 qpair failed and we were unable to recover it. 00:33:15.863 [2024-07-22 18:10:20.126385] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:15.863 [2024-07-22 18:10:20.126514] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:15.863 [2024-07-22 18:10:20.126540] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:15.863 [2024-07-22 18:10:20.126548] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:15.863 [2024-07-22 18:10:20.126554] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:15.863 [2024-07-22 18:10:20.126571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:15.863 qpair failed and we were unable to recover it. 00:33:16.127 [2024-07-22 18:10:20.136281] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.127 [2024-07-22 18:10:20.136384] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.127 [2024-07-22 18:10:20.136409] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.127 [2024-07-22 18:10:20.136416] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.127 [2024-07-22 18:10:20.136423] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.127 [2024-07-22 18:10:20.136441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.127 qpair failed and we were unable to recover it. 
00:33:16.127 [2024-07-22 18:10:20.146204] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.127 [2024-07-22 18:10:20.146276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.127 [2024-07-22 18:10:20.146300] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.127 [2024-07-22 18:10:20.146307] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.127 [2024-07-22 18:10:20.146313] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.127 [2024-07-22 18:10:20.146335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.127 qpair failed and we were unable to recover it. 00:33:16.127 [2024-07-22 18:10:20.156234] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.127 [2024-07-22 18:10:20.156323] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.127 [2024-07-22 18:10:20.156354] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.127 [2024-07-22 18:10:20.156362] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.127 [2024-07-22 18:10:20.156369] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.127 [2024-07-22 18:10:20.156387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.127 qpair failed and we were unable to recover it. 00:33:16.127 [2024-07-22 18:10:20.166568] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.127 [2024-07-22 18:10:20.166683] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.127 [2024-07-22 18:10:20.166708] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.127 [2024-07-22 18:10:20.166715] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.127 [2024-07-22 18:10:20.166721] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.127 [2024-07-22 18:10:20.166738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.127 qpair failed and we were unable to recover it. 
00:33:16.127 [2024-07-22 18:10:20.176417] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.127 [2024-07-22 18:10:20.176548] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.127 [2024-07-22 18:10:20.176572] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.127 [2024-07-22 18:10:20.176580] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.127 [2024-07-22 18:10:20.176586] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.127 [2024-07-22 18:10:20.176603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.127 qpair failed and we were unable to recover it. 00:33:16.127 [2024-07-22 18:10:20.186488] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.127 [2024-07-22 18:10:20.186577] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.127 [2024-07-22 18:10:20.186601] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.127 [2024-07-22 18:10:20.186608] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.127 [2024-07-22 18:10:20.186614] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.127 [2024-07-22 18:10:20.186631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.127 qpair failed and we were unable to recover it. 00:33:16.127 [2024-07-22 18:10:20.196461] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.127 [2024-07-22 18:10:20.196546] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.127 [2024-07-22 18:10:20.196577] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.127 [2024-07-22 18:10:20.196585] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.127 [2024-07-22 18:10:20.196592] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.127 [2024-07-22 18:10:20.196610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.127 qpair failed and we were unable to recover it. 
00:33:16.127 [2024-07-22 18:10:20.206821] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.127 [2024-07-22 18:10:20.206979] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.127 [2024-07-22 18:10:20.207005] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.127 [2024-07-22 18:10:20.207012] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.127 [2024-07-22 18:10:20.207019] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.127 [2024-07-22 18:10:20.207037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.127 qpair failed and we were unable to recover it. 00:33:16.127 [2024-07-22 18:10:20.216585] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.127 [2024-07-22 18:10:20.216678] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.127 [2024-07-22 18:10:20.216704] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.127 [2024-07-22 18:10:20.216712] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.127 [2024-07-22 18:10:20.216718] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.127 [2024-07-22 18:10:20.216736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.127 qpair failed and we were unable to recover it. 00:33:16.127 [2024-07-22 18:10:20.226590] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.127 [2024-07-22 18:10:20.226670] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.127 [2024-07-22 18:10:20.226697] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.127 [2024-07-22 18:10:20.226706] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.127 [2024-07-22 18:10:20.226712] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.127 [2024-07-22 18:10:20.226730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.127 qpair failed and we were unable to recover it. 
00:33:16.127 [2024-07-22 18:10:20.236532] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.127 [2024-07-22 18:10:20.236603] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.127 [2024-07-22 18:10:20.236627] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.127 [2024-07-22 18:10:20.236635] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.127 [2024-07-22 18:10:20.236647] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.128 [2024-07-22 18:10:20.236665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.128 qpair failed and we were unable to recover it. 00:33:16.128 [2024-07-22 18:10:20.246946] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.128 [2024-07-22 18:10:20.247061] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.128 [2024-07-22 18:10:20.247086] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.128 [2024-07-22 18:10:20.247094] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.128 [2024-07-22 18:10:20.247101] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.128 [2024-07-22 18:10:20.247118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.128 qpair failed and we were unable to recover it. 00:33:16.128 [2024-07-22 18:10:20.256742] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.128 [2024-07-22 18:10:20.256839] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.128 [2024-07-22 18:10:20.256865] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.128 [2024-07-22 18:10:20.256872] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.128 [2024-07-22 18:10:20.256878] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.128 [2024-07-22 18:10:20.256894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.128 qpair failed and we were unable to recover it. 
00:33:16.128 [2024-07-22 18:10:20.266706] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.128 [2024-07-22 18:10:20.266783] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.128 [2024-07-22 18:10:20.266808] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.128 [2024-07-22 18:10:20.266815] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.128 [2024-07-22 18:10:20.266821] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.128 [2024-07-22 18:10:20.266838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.128 qpair failed and we were unable to recover it. 00:33:16.128 [2024-07-22 18:10:20.276744] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.128 [2024-07-22 18:10:20.276851] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.128 [2024-07-22 18:10:20.276877] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.128 [2024-07-22 18:10:20.276884] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.128 [2024-07-22 18:10:20.276890] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.128 [2024-07-22 18:10:20.276906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.128 qpair failed and we were unable to recover it. 00:33:16.128 [2024-07-22 18:10:20.287048] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.128 [2024-07-22 18:10:20.287169] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.128 [2024-07-22 18:10:20.287195] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.128 [2024-07-22 18:10:20.287202] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.128 [2024-07-22 18:10:20.287208] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.128 [2024-07-22 18:10:20.287225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.128 qpair failed and we were unable to recover it. 
00:33:16.128 [2024-07-22 18:10:20.296711] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.128 [2024-07-22 18:10:20.296814] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.128 [2024-07-22 18:10:20.296839] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.128 [2024-07-22 18:10:20.296846] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.128 [2024-07-22 18:10:20.296852] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.128 [2024-07-22 18:10:20.296868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.128 qpair failed and we were unable to recover it. 00:33:16.128 [2024-07-22 18:10:20.306841] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.128 [2024-07-22 18:10:20.306929] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.128 [2024-07-22 18:10:20.306954] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.128 [2024-07-22 18:10:20.306962] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.128 [2024-07-22 18:10:20.306967] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.128 [2024-07-22 18:10:20.306984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.128 qpair failed and we were unable to recover it. 00:33:16.128 [2024-07-22 18:10:20.316882] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.128 [2024-07-22 18:10:20.316963] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.128 [2024-07-22 18:10:20.316988] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.128 [2024-07-22 18:10:20.316995] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.128 [2024-07-22 18:10:20.317001] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.128 [2024-07-22 18:10:20.317018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.128 qpair failed and we were unable to recover it. 
00:33:16.128 [2024-07-22 18:10:20.327068] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.128 [2024-07-22 18:10:20.327182] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.128 [2024-07-22 18:10:20.327206] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.128 [2024-07-22 18:10:20.327213] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.128 [2024-07-22 18:10:20.327225] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.128 [2024-07-22 18:10:20.327241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.128 qpair failed and we were unable to recover it. 00:33:16.128 [2024-07-22 18:10:20.336853] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.128 [2024-07-22 18:10:20.336945] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.128 [2024-07-22 18:10:20.336970] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.128 [2024-07-22 18:10:20.336977] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.128 [2024-07-22 18:10:20.336983] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.128 [2024-07-22 18:10:20.337000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.128 qpair failed and we were unable to recover it. 00:33:16.128 [2024-07-22 18:10:20.346985] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.128 [2024-07-22 18:10:20.347095] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.128 [2024-07-22 18:10:20.347120] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.128 [2024-07-22 18:10:20.347127] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.128 [2024-07-22 18:10:20.347133] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.128 [2024-07-22 18:10:20.347150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.128 qpair failed and we were unable to recover it. 
00:33:16.128 [2024-07-22 18:10:20.357038] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.128 [2024-07-22 18:10:20.357125] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.128 [2024-07-22 18:10:20.357149] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.128 [2024-07-22 18:10:20.357157] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.128 [2024-07-22 18:10:20.357162] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.128 [2024-07-22 18:10:20.357179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.128 qpair failed and we were unable to recover it. 00:33:16.128 [2024-07-22 18:10:20.367324] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.128 [2024-07-22 18:10:20.367451] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.128 [2024-07-22 18:10:20.367479] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.128 [2024-07-22 18:10:20.367486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.128 [2024-07-22 18:10:20.367493] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.128 [2024-07-22 18:10:20.367510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.128 qpair failed and we were unable to recover it. 00:33:16.128 [2024-07-22 18:10:20.377071] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.129 [2024-07-22 18:10:20.377204] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.129 [2024-07-22 18:10:20.377229] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.129 [2024-07-22 18:10:20.377236] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.129 [2024-07-22 18:10:20.377242] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.129 [2024-07-22 18:10:20.377259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.129 qpair failed and we were unable to recover it. 
00:33:16.129 [2024-07-22 18:10:20.387108] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.129 [2024-07-22 18:10:20.387188] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.129 [2024-07-22 18:10:20.387213] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.129 [2024-07-22 18:10:20.387220] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.129 [2024-07-22 18:10:20.387226] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.129 [2024-07-22 18:10:20.387243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.129 qpair failed and we were unable to recover it. 00:33:16.129 [2024-07-22 18:10:20.397016] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.129 [2024-07-22 18:10:20.397102] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.129 [2024-07-22 18:10:20.397128] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.129 [2024-07-22 18:10:20.397136] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.129 [2024-07-22 18:10:20.397142] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.129 [2024-07-22 18:10:20.397159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.129 qpair failed and we were unable to recover it. 00:33:16.392 [2024-07-22 18:10:20.407444] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.392 [2024-07-22 18:10:20.407617] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.392 [2024-07-22 18:10:20.407644] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.392 [2024-07-22 18:10:20.407652] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.392 [2024-07-22 18:10:20.407658] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.392 [2024-07-22 18:10:20.407676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.392 qpair failed and we were unable to recover it. 
00:33:16.392 [2024-07-22 18:10:20.417145] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.392 [2024-07-22 18:10:20.417234] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.392 [2024-07-22 18:10:20.417259] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.392 [2024-07-22 18:10:20.417267] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.392 [2024-07-22 18:10:20.417279] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.392 [2024-07-22 18:10:20.417296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.392 qpair failed and we were unable to recover it. 00:33:16.392 [2024-07-22 18:10:20.427230] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.392 [2024-07-22 18:10:20.427305] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.392 [2024-07-22 18:10:20.427329] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.392 [2024-07-22 18:10:20.427337] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.392 [2024-07-22 18:10:20.427343] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.392 [2024-07-22 18:10:20.427367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.392 qpair failed and we were unable to recover it. 00:33:16.392 [2024-07-22 18:10:20.437229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.392 [2024-07-22 18:10:20.437347] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.392 [2024-07-22 18:10:20.437378] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.392 [2024-07-22 18:10:20.437386] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.392 [2024-07-22 18:10:20.437392] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.392 [2024-07-22 18:10:20.437409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.392 qpair failed and we were unable to recover it. 
00:33:16.392 [2024-07-22 18:10:20.447535] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.392 [2024-07-22 18:10:20.447647] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.392 [2024-07-22 18:10:20.447672] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.392 [2024-07-22 18:10:20.447680] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.392 [2024-07-22 18:10:20.447686] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.392 [2024-07-22 18:10:20.447703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.392 qpair failed and we were unable to recover it. 00:33:16.392 [2024-07-22 18:10:20.457311] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.392 [2024-07-22 18:10:20.457435] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.392 [2024-07-22 18:10:20.457459] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.392 [2024-07-22 18:10:20.457466] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.392 [2024-07-22 18:10:20.457472] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.392 [2024-07-22 18:10:20.457489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.392 qpair failed and we were unable to recover it. 00:33:16.392 [2024-07-22 18:10:20.467332] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.392 [2024-07-22 18:10:20.467433] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.392 [2024-07-22 18:10:20.467457] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.392 [2024-07-22 18:10:20.467465] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.392 [2024-07-22 18:10:20.467471] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.392 [2024-07-22 18:10:20.467487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.392 qpair failed and we were unable to recover it. 
00:33:16.392 [2024-07-22 18:10:20.477354] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.392 [2024-07-22 18:10:20.477434] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.392 [2024-07-22 18:10:20.477459] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.392 [2024-07-22 18:10:20.477466] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.392 [2024-07-22 18:10:20.477473] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.392 [2024-07-22 18:10:20.477489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.392 qpair failed and we were unable to recover it. 00:33:16.392 [2024-07-22 18:10:20.487673] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.392 [2024-07-22 18:10:20.487790] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.392 [2024-07-22 18:10:20.487815] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.392 [2024-07-22 18:10:20.487822] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.392 [2024-07-22 18:10:20.487828] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.392 [2024-07-22 18:10:20.487845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.392 qpair failed and we were unable to recover it. 00:33:16.392 [2024-07-22 18:10:20.497443] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.392 [2024-07-22 18:10:20.497532] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.392 [2024-07-22 18:10:20.497556] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.392 [2024-07-22 18:10:20.497563] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.392 [2024-07-22 18:10:20.497569] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.392 [2024-07-22 18:10:20.497586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.392 qpair failed and we were unable to recover it. 
00:33:16.392 [2024-07-22 18:10:20.507452] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.392 [2024-07-22 18:10:20.507537] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.392 [2024-07-22 18:10:20.507561] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.392 [2024-07-22 18:10:20.507568] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.392 [2024-07-22 18:10:20.507580] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.392 [2024-07-22 18:10:20.507597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.392 qpair failed and we were unable to recover it. 00:33:16.392 [2024-07-22 18:10:20.517347] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.392 [2024-07-22 18:10:20.517436] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.392 [2024-07-22 18:10:20.517460] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.392 [2024-07-22 18:10:20.517467] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.392 [2024-07-22 18:10:20.517473] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.392 [2024-07-22 18:10:20.517489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.392 qpair failed and we were unable to recover it. 00:33:16.392 [2024-07-22 18:10:20.527804] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.392 [2024-07-22 18:10:20.527910] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.392 [2024-07-22 18:10:20.527934] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.393 [2024-07-22 18:10:20.527941] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.393 [2024-07-22 18:10:20.527947] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.393 [2024-07-22 18:10:20.527963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.393 qpair failed and we were unable to recover it. 
00:33:16.393 [2024-07-22 18:10:20.537630] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.393 [2024-07-22 18:10:20.537719] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.393 [2024-07-22 18:10:20.537743] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.393 [2024-07-22 18:10:20.537751] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.393 [2024-07-22 18:10:20.537759] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.393 [2024-07-22 18:10:20.537776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.393 qpair failed and we were unable to recover it. 00:33:16.393 [2024-07-22 18:10:20.547581] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.393 [2024-07-22 18:10:20.547702] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.393 [2024-07-22 18:10:20.547727] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.393 [2024-07-22 18:10:20.547734] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.393 [2024-07-22 18:10:20.547740] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.393 [2024-07-22 18:10:20.547756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.393 qpair failed and we were unable to recover it. 00:33:16.393 [2024-07-22 18:10:20.557508] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.393 [2024-07-22 18:10:20.557586] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.393 [2024-07-22 18:10:20.557611] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.393 [2024-07-22 18:10:20.557617] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.393 [2024-07-22 18:10:20.557623] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.393 [2024-07-22 18:10:20.557639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.393 qpair failed and we were unable to recover it. 
00:33:16.393 [2024-07-22 18:10:20.567904] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.393 [2024-07-22 18:10:20.568030] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.393 [2024-07-22 18:10:20.568054] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.393 [2024-07-22 18:10:20.568061] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.393 [2024-07-22 18:10:20.568067] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.393 [2024-07-22 18:10:20.568084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.393 qpair failed and we were unable to recover it. 00:33:16.393 [2024-07-22 18:10:20.577703] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.393 [2024-07-22 18:10:20.577793] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.393 [2024-07-22 18:10:20.577817] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.393 [2024-07-22 18:10:20.577824] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.393 [2024-07-22 18:10:20.577830] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.393 [2024-07-22 18:10:20.577846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.393 qpair failed and we were unable to recover it. 00:33:16.393 [2024-07-22 18:10:20.587582] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.393 [2024-07-22 18:10:20.587666] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.393 [2024-07-22 18:10:20.587689] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.393 [2024-07-22 18:10:20.587697] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.393 [2024-07-22 18:10:20.587703] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.393 [2024-07-22 18:10:20.587719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.393 qpair failed and we were unable to recover it. 
00:33:16.393 [2024-07-22 18:10:20.597728] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.393 [2024-07-22 18:10:20.597806] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.393 [2024-07-22 18:10:20.597830] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.393 [2024-07-22 18:10:20.597846] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.393 [2024-07-22 18:10:20.597852] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.393 [2024-07-22 18:10:20.597868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.393 qpair failed and we were unable to recover it. 00:33:16.393 [2024-07-22 18:10:20.608026] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.393 [2024-07-22 18:10:20.608140] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.393 [2024-07-22 18:10:20.608164] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.393 [2024-07-22 18:10:20.608172] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.393 [2024-07-22 18:10:20.608178] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.393 [2024-07-22 18:10:20.608194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.393 qpair failed and we were unable to recover it. 00:33:16.393 [2024-07-22 18:10:20.617697] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.393 [2024-07-22 18:10:20.617793] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.393 [2024-07-22 18:10:20.617817] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.393 [2024-07-22 18:10:20.617824] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.393 [2024-07-22 18:10:20.617830] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.393 [2024-07-22 18:10:20.617847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.393 qpair failed and we were unable to recover it. 
00:33:16.393 [2024-07-22 18:10:20.627809] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.393 [2024-07-22 18:10:20.627894] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.393 [2024-07-22 18:10:20.627918] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.393 [2024-07-22 18:10:20.627926] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.393 [2024-07-22 18:10:20.627931] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.393 [2024-07-22 18:10:20.627948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.393 qpair failed and we were unable to recover it. 00:33:16.393 [2024-07-22 18:10:20.637850] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.393 [2024-07-22 18:10:20.637941] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.393 [2024-07-22 18:10:20.637966] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.393 [2024-07-22 18:10:20.637978] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.393 [2024-07-22 18:10:20.637985] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.393 [2024-07-22 18:10:20.638002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.393 qpair failed and we were unable to recover it. 00:33:16.393 [2024-07-22 18:10:20.648154] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.393 [2024-07-22 18:10:20.648291] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.393 [2024-07-22 18:10:20.648316] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.393 [2024-07-22 18:10:20.648323] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.393 [2024-07-22 18:10:20.648329] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.393 [2024-07-22 18:10:20.648346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.393 qpair failed and we were unable to recover it. 
00:33:16.393 [2024-07-22 18:10:20.657819] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.393 [2024-07-22 18:10:20.657915] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.393 [2024-07-22 18:10:20.657940] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.393 [2024-07-22 18:10:20.657948] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.393 [2024-07-22 18:10:20.657954] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.394 [2024-07-22 18:10:20.657971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.394 qpair failed and we were unable to recover it. 00:33:16.656 [2024-07-22 18:10:20.667998] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.656 [2024-07-22 18:10:20.668093] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.656 [2024-07-22 18:10:20.668118] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.656 [2024-07-22 18:10:20.668126] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.656 [2024-07-22 18:10:20.668131] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.656 [2024-07-22 18:10:20.668149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.656 qpair failed and we were unable to recover it. 00:33:16.656 [2024-07-22 18:10:20.677968] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.656 [2024-07-22 18:10:20.678050] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.656 [2024-07-22 18:10:20.678076] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.656 [2024-07-22 18:10:20.678087] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.656 [2024-07-22 18:10:20.678093] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.656 [2024-07-22 18:10:20.678110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.656 qpair failed and we were unable to recover it. 
00:33:16.656 [2024-07-22 18:10:20.688295] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.656 [2024-07-22 18:10:20.688416] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.656 [2024-07-22 18:10:20.688444] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.656 [2024-07-22 18:10:20.688459] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.656 [2024-07-22 18:10:20.688465] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.656 [2024-07-22 18:10:20.688482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.656 qpair failed and we were unable to recover it. 00:33:16.656 [2024-07-22 18:10:20.698044] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.656 [2024-07-22 18:10:20.698139] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.656 [2024-07-22 18:10:20.698164] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.656 [2024-07-22 18:10:20.698171] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.656 [2024-07-22 18:10:20.698177] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.656 [2024-07-22 18:10:20.698194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.656 qpair failed and we were unable to recover it. 00:33:16.656 [2024-07-22 18:10:20.708080] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.656 [2024-07-22 18:10:20.708195] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.656 [2024-07-22 18:10:20.708220] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.656 [2024-07-22 18:10:20.708228] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.656 [2024-07-22 18:10:20.708233] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.656 [2024-07-22 18:10:20.708250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.656 qpair failed and we were unable to recover it. 
00:33:16.656 [2024-07-22 18:10:20.718125] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.656 [2024-07-22 18:10:20.718209] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.656 [2024-07-22 18:10:20.718234] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.656 [2024-07-22 18:10:20.718241] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.656 [2024-07-22 18:10:20.718247] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.656 [2024-07-22 18:10:20.718264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.656 qpair failed and we were unable to recover it. 00:33:16.656 [2024-07-22 18:10:20.728462] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.656 [2024-07-22 18:10:20.728576] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.656 [2024-07-22 18:10:20.728603] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.656 [2024-07-22 18:10:20.728611] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.656 [2024-07-22 18:10:20.728616] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.656 [2024-07-22 18:10:20.728634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.656 qpair failed and we were unable to recover it. 00:33:16.657 [2024-07-22 18:10:20.738183] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.657 [2024-07-22 18:10:20.738280] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.657 [2024-07-22 18:10:20.738304] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.657 [2024-07-22 18:10:20.738312] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.657 [2024-07-22 18:10:20.738318] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.657 [2024-07-22 18:10:20.738336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.657 qpair failed and we were unable to recover it. 
00:33:16.657 [2024-07-22 18:10:20.748239] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.657 [2024-07-22 18:10:20.748321] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.657 [2024-07-22 18:10:20.748345] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.657 [2024-07-22 18:10:20.748358] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.657 [2024-07-22 18:10:20.748365] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.657 [2024-07-22 18:10:20.748382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.657 qpair failed and we were unable to recover it. 00:33:16.657 [2024-07-22 18:10:20.758265] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.657 [2024-07-22 18:10:20.758376] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.657 [2024-07-22 18:10:20.758400] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.657 [2024-07-22 18:10:20.758407] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.657 [2024-07-22 18:10:20.758414] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.657 [2024-07-22 18:10:20.758430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.657 qpair failed and we were unable to recover it. 00:33:16.657 [2024-07-22 18:10:20.768555] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.657 [2024-07-22 18:10:20.768715] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.657 [2024-07-22 18:10:20.768741] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.657 [2024-07-22 18:10:20.768748] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.657 [2024-07-22 18:10:20.768755] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.657 [2024-07-22 18:10:20.768771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.657 qpair failed and we were unable to recover it. 
00:33:16.657 [2024-07-22 18:10:20.778300] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.657 [2024-07-22 18:10:20.778410] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.657 [2024-07-22 18:10:20.778435] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.657 [2024-07-22 18:10:20.778457] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.657 [2024-07-22 18:10:20.778464] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.657 [2024-07-22 18:10:20.778480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.657 qpair failed and we were unable to recover it. 00:33:16.657 [2024-07-22 18:10:20.788361] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.657 [2024-07-22 18:10:20.788434] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.657 [2024-07-22 18:10:20.788458] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.657 [2024-07-22 18:10:20.788466] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.657 [2024-07-22 18:10:20.788472] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.657 [2024-07-22 18:10:20.788488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.657 qpair failed and we were unable to recover it. 00:33:16.657 [2024-07-22 18:10:20.798419] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.657 [2024-07-22 18:10:20.798521] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.657 [2024-07-22 18:10:20.798546] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.657 [2024-07-22 18:10:20.798553] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.657 [2024-07-22 18:10:20.798559] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.657 [2024-07-22 18:10:20.798575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.657 qpair failed and we were unable to recover it. 
00:33:16.657 [2024-07-22 18:10:20.808710] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.657 [2024-07-22 18:10:20.808819] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.657 [2024-07-22 18:10:20.808843] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.657 [2024-07-22 18:10:20.808850] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.657 [2024-07-22 18:10:20.808857] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.657 [2024-07-22 18:10:20.808873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.657 qpair failed and we were unable to recover it. 00:33:16.657 [2024-07-22 18:10:20.818453] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.657 [2024-07-22 18:10:20.818549] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.657 [2024-07-22 18:10:20.818574] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.657 [2024-07-22 18:10:20.818581] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.657 [2024-07-22 18:10:20.818587] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.657 [2024-07-22 18:10:20.818604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.657 qpair failed and we were unable to recover it. 00:33:16.657 [2024-07-22 18:10:20.828484] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.657 [2024-07-22 18:10:20.828570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.657 [2024-07-22 18:10:20.828594] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.657 [2024-07-22 18:10:20.828601] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.657 [2024-07-22 18:10:20.828607] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.657 [2024-07-22 18:10:20.828623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.657 qpair failed and we were unable to recover it. 
00:33:16.657 [2024-07-22 18:10:20.838516] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.657 [2024-07-22 18:10:20.838599] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.657 [2024-07-22 18:10:20.838623] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.657 [2024-07-22 18:10:20.838630] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.657 [2024-07-22 18:10:20.838636] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.657 [2024-07-22 18:10:20.838653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.657 qpair failed and we were unable to recover it. 00:33:16.657 [2024-07-22 18:10:20.848831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.657 [2024-07-22 18:10:20.848952] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.657 [2024-07-22 18:10:20.848976] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.657 [2024-07-22 18:10:20.848984] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.657 [2024-07-22 18:10:20.848990] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.657 [2024-07-22 18:10:20.849006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.657 qpair failed and we were unable to recover it. 00:33:16.657 [2024-07-22 18:10:20.858471] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.658 [2024-07-22 18:10:20.858570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.658 [2024-07-22 18:10:20.858593] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.658 [2024-07-22 18:10:20.858600] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.658 [2024-07-22 18:10:20.858607] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.658 [2024-07-22 18:10:20.858623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.658 qpair failed and we were unable to recover it. 
00:33:16.658 [2024-07-22 18:10:20.868569] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.658 [2024-07-22 18:10:20.868649] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.658 [2024-07-22 18:10:20.868674] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.658 [2024-07-22 18:10:20.868688] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.658 [2024-07-22 18:10:20.868694] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.658 [2024-07-22 18:10:20.868710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.658 qpair failed and we were unable to recover it. 00:33:16.658 [2024-07-22 18:10:20.878609] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.658 [2024-07-22 18:10:20.878691] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.658 [2024-07-22 18:10:20.878715] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.658 [2024-07-22 18:10:20.878723] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.658 [2024-07-22 18:10:20.878729] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.658 [2024-07-22 18:10:20.878745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.658 qpair failed and we were unable to recover it. 00:33:16.658 [2024-07-22 18:10:20.889004] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.658 [2024-07-22 18:10:20.889140] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.658 [2024-07-22 18:10:20.889164] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.658 [2024-07-22 18:10:20.889172] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.658 [2024-07-22 18:10:20.889178] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.658 [2024-07-22 18:10:20.889196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.658 qpair failed and we were unable to recover it. 
00:33:16.658 [2024-07-22 18:10:20.898731] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.658 [2024-07-22 18:10:20.898821] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.658 [2024-07-22 18:10:20.898846] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.658 [2024-07-22 18:10:20.898853] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.658 [2024-07-22 18:10:20.898859] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.658 [2024-07-22 18:10:20.898875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.658 qpair failed and we were unable to recover it. 00:33:16.658 [2024-07-22 18:10:20.908741] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.658 [2024-07-22 18:10:20.908824] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.658 [2024-07-22 18:10:20.908849] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.658 [2024-07-22 18:10:20.908856] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.658 [2024-07-22 18:10:20.908861] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.658 [2024-07-22 18:10:20.908878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.658 qpair failed and we were unable to recover it. 00:33:16.658 [2024-07-22 18:10:20.918708] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.658 [2024-07-22 18:10:20.918786] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.658 [2024-07-22 18:10:20.918811] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.658 [2024-07-22 18:10:20.918818] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.658 [2024-07-22 18:10:20.918824] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.658 [2024-07-22 18:10:20.918841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.658 qpair failed and we were unable to recover it. 
00:33:16.658 [2024-07-22 18:10:20.928997] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.658 [2024-07-22 18:10:20.929114] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.658 [2024-07-22 18:10:20.929140] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.658 [2024-07-22 18:10:20.929147] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.658 [2024-07-22 18:10:20.929153] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.658 [2024-07-22 18:10:20.929170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.658 qpair failed and we were unable to recover it. 00:33:16.921 [2024-07-22 18:10:20.938912] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.921 [2024-07-22 18:10:20.939010] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.921 [2024-07-22 18:10:20.939036] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.921 [2024-07-22 18:10:20.939044] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.921 [2024-07-22 18:10:20.939050] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.921 [2024-07-22 18:10:20.939067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.921 qpair failed and we were unable to recover it. 00:33:16.922 [2024-07-22 18:10:20.948911] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.922 [2024-07-22 18:10:20.948993] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.922 [2024-07-22 18:10:20.949017] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.922 [2024-07-22 18:10:20.949024] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.922 [2024-07-22 18:10:20.949030] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.922 [2024-07-22 18:10:20.949047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.922 qpair failed and we were unable to recover it. 
00:33:16.922 [2024-07-22 18:10:20.958946] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.922 [2024-07-22 18:10:20.959025] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.922 [2024-07-22 18:10:20.959056] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.922 [2024-07-22 18:10:20.959063] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.922 [2024-07-22 18:10:20.959069] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.922 [2024-07-22 18:10:20.959085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.922 qpair failed and we were unable to recover it. 00:33:16.922 [2024-07-22 18:10:20.969283] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.922 [2024-07-22 18:10:20.969406] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.922 [2024-07-22 18:10:20.969431] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.922 [2024-07-22 18:10:20.969438] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.922 [2024-07-22 18:10:20.969445] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.922 [2024-07-22 18:10:20.969461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.922 qpair failed and we were unable to recover it. 00:33:16.922 [2024-07-22 18:10:20.979039] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.922 [2024-07-22 18:10:20.979132] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.922 [2024-07-22 18:10:20.979156] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.922 [2024-07-22 18:10:20.979164] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.922 [2024-07-22 18:10:20.979170] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.922 [2024-07-22 18:10:20.979186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.922 qpair failed and we were unable to recover it. 
00:33:16.922 [2024-07-22 18:10:20.989055] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.922 [2024-07-22 18:10:20.989142] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.922 [2024-07-22 18:10:20.989166] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.922 [2024-07-22 18:10:20.989173] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.922 [2024-07-22 18:10:20.989179] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.922 [2024-07-22 18:10:20.989195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.922 qpair failed and we were unable to recover it. 00:33:16.922 [2024-07-22 18:10:20.999089] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.922 [2024-07-22 18:10:20.999175] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.922 [2024-07-22 18:10:20.999199] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.922 [2024-07-22 18:10:20.999206] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.922 [2024-07-22 18:10:20.999213] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.922 [2024-07-22 18:10:20.999230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.922 qpair failed and we were unable to recover it. 00:33:16.922 [2024-07-22 18:10:21.009446] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.922 [2024-07-22 18:10:21.009582] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.922 [2024-07-22 18:10:21.009606] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.922 [2024-07-22 18:10:21.009613] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.922 [2024-07-22 18:10:21.009619] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.922 [2024-07-22 18:10:21.009635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.922 qpair failed and we were unable to recover it. 
00:33:16.922 [2024-07-22 18:10:21.019148] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.922 [2024-07-22 18:10:21.019249] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.922 [2024-07-22 18:10:21.019273] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.922 [2024-07-22 18:10:21.019280] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.922 [2024-07-22 18:10:21.019287] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.922 [2024-07-22 18:10:21.019303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.922 qpair failed and we were unable to recover it. 00:33:16.922 [2024-07-22 18:10:21.029188] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.922 [2024-07-22 18:10:21.029264] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.922 [2024-07-22 18:10:21.029288] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.922 [2024-07-22 18:10:21.029295] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.922 [2024-07-22 18:10:21.029301] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.922 [2024-07-22 18:10:21.029318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.922 qpair failed and we were unable to recover it. 00:33:16.922 [2024-07-22 18:10:21.039245] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.922 [2024-07-22 18:10:21.039327] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.922 [2024-07-22 18:10:21.039357] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.922 [2024-07-22 18:10:21.039365] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.922 [2024-07-22 18:10:21.039372] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.922 [2024-07-22 18:10:21.039388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.922 qpair failed and we were unable to recover it. 
00:33:16.922 [2024-07-22 18:10:21.049597] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.922 [2024-07-22 18:10:21.049718] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.922 [2024-07-22 18:10:21.049748] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.922 [2024-07-22 18:10:21.049755] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.922 [2024-07-22 18:10:21.049761] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.922 [2024-07-22 18:10:21.049778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.922 qpair failed and we were unable to recover it. 00:33:16.922 [2024-07-22 18:10:21.059327] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.922 [2024-07-22 18:10:21.059430] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.922 [2024-07-22 18:10:21.059454] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.922 [2024-07-22 18:10:21.059462] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.922 [2024-07-22 18:10:21.059468] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.922 [2024-07-22 18:10:21.059484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.922 qpair failed and we were unable to recover it. 00:33:16.922 [2024-07-22 18:10:21.069327] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.922 [2024-07-22 18:10:21.069402] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.922 [2024-07-22 18:10:21.069427] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.922 [2024-07-22 18:10:21.069434] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.922 [2024-07-22 18:10:21.069440] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.922 [2024-07-22 18:10:21.069457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.922 qpair failed and we were unable to recover it. 
00:33:16.922 [2024-07-22 18:10:21.079407] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.922 [2024-07-22 18:10:21.079494] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.922 [2024-07-22 18:10:21.079518] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.923 [2024-07-22 18:10:21.079525] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.923 [2024-07-22 18:10:21.079531] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.923 [2024-07-22 18:10:21.079548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.923 qpair failed and we were unable to recover it. 00:33:16.923 [2024-07-22 18:10:21.089772] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.923 [2024-07-22 18:10:21.089896] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.923 [2024-07-22 18:10:21.089921] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.923 [2024-07-22 18:10:21.089928] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.923 [2024-07-22 18:10:21.089934] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.923 [2024-07-22 18:10:21.089957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.923 qpair failed and we were unable to recover it. 00:33:16.923 [2024-07-22 18:10:21.099470] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.923 [2024-07-22 18:10:21.099566] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.923 [2024-07-22 18:10:21.099589] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.923 [2024-07-22 18:10:21.099596] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.923 [2024-07-22 18:10:21.099603] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.923 [2024-07-22 18:10:21.099619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.923 qpair failed and we were unable to recover it. 
00:33:16.923 [2024-07-22 18:10:21.109513] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.923 [2024-07-22 18:10:21.109595] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.923 [2024-07-22 18:10:21.109618] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.923 [2024-07-22 18:10:21.109625] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.923 [2024-07-22 18:10:21.109631] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.923 [2024-07-22 18:10:21.109647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.923 qpair failed and we were unable to recover it. 00:33:16.923 [2024-07-22 18:10:21.119671] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.923 [2024-07-22 18:10:21.119808] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.923 [2024-07-22 18:10:21.119833] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.923 [2024-07-22 18:10:21.119840] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.923 [2024-07-22 18:10:21.119846] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.923 [2024-07-22 18:10:21.119862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.923 qpair failed and we were unable to recover it. 00:33:16.923 [2024-07-22 18:10:21.130021] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.923 [2024-07-22 18:10:21.130144] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.923 [2024-07-22 18:10:21.130168] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.923 [2024-07-22 18:10:21.130176] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.923 [2024-07-22 18:10:21.130182] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.923 [2024-07-22 18:10:21.130199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.923 qpair failed and we were unable to recover it. 
00:33:16.923 [2024-07-22 18:10:21.139657] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.923 [2024-07-22 18:10:21.139780] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.923 [2024-07-22 18:10:21.139811] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.923 [2024-07-22 18:10:21.139819] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.923 [2024-07-22 18:10:21.139826] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.923 [2024-07-22 18:10:21.139842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.923 qpair failed and we were unable to recover it. 00:33:16.923 [2024-07-22 18:10:21.149752] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.923 [2024-07-22 18:10:21.149864] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.923 [2024-07-22 18:10:21.149888] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.923 [2024-07-22 18:10:21.149896] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.923 [2024-07-22 18:10:21.149901] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.923 [2024-07-22 18:10:21.149919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.923 qpair failed and we were unable to recover it. 00:33:16.923 [2024-07-22 18:10:21.159689] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.923 [2024-07-22 18:10:21.159767] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.923 [2024-07-22 18:10:21.159792] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.923 [2024-07-22 18:10:21.159799] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.923 [2024-07-22 18:10:21.159805] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.923 [2024-07-22 18:10:21.159822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.923 qpair failed and we were unable to recover it. 
00:33:16.923 [2024-07-22 18:10:21.170033] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.923 [2024-07-22 18:10:21.170195] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.923 [2024-07-22 18:10:21.170220] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.923 [2024-07-22 18:10:21.170230] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.923 [2024-07-22 18:10:21.170239] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.923 [2024-07-22 18:10:21.170258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.923 qpair failed and we were unable to recover it. 00:33:16.923 [2024-07-22 18:10:21.179686] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.923 [2024-07-22 18:10:21.179776] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.923 [2024-07-22 18:10:21.179801] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.923 [2024-07-22 18:10:21.179809] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.923 [2024-07-22 18:10:21.179815] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.923 [2024-07-22 18:10:21.179839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.923 qpair failed and we were unable to recover it. 00:33:16.923 [2024-07-22 18:10:21.189750] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:16.923 [2024-07-22 18:10:21.189833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:16.923 [2024-07-22 18:10:21.189857] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:16.923 [2024-07-22 18:10:21.189868] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:16.923 [2024-07-22 18:10:21.189877] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:16.923 [2024-07-22 18:10:21.189895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:16.923 qpair failed and we were unable to recover it. 
00:33:17.186 [2024-07-22 18:10:21.199813] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.186 [2024-07-22 18:10:21.199887] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.186 [2024-07-22 18:10:21.199911] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.186 [2024-07-22 18:10:21.199918] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.186 [2024-07-22 18:10:21.199924] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.186 [2024-07-22 18:10:21.199940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.186 qpair failed and we were unable to recover it. 00:33:17.186 [2024-07-22 18:10:21.210049] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.186 [2024-07-22 18:10:21.210204] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.186 [2024-07-22 18:10:21.210229] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.186 [2024-07-22 18:10:21.210236] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.186 [2024-07-22 18:10:21.210242] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.186 [2024-07-22 18:10:21.210259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.186 qpair failed and we were unable to recover it. 00:33:17.186 [2024-07-22 18:10:21.219956] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.186 [2024-07-22 18:10:21.220083] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.186 [2024-07-22 18:10:21.220110] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.186 [2024-07-22 18:10:21.220118] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.186 [2024-07-22 18:10:21.220124] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.186 [2024-07-22 18:10:21.220142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.186 qpair failed and we were unable to recover it. 
00:33:17.186 [2024-07-22 18:10:21.229911] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.186 [2024-07-22 18:10:21.229994] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.186 [2024-07-22 18:10:21.230025] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.186 [2024-07-22 18:10:21.230033] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.186 [2024-07-22 18:10:21.230039] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.186 [2024-07-22 18:10:21.230057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.186 qpair failed and we were unable to recover it. 00:33:17.186 [2024-07-22 18:10:21.239949] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.186 [2024-07-22 18:10:21.240026] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.186 [2024-07-22 18:10:21.240051] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.186 [2024-07-22 18:10:21.240058] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.186 [2024-07-22 18:10:21.240064] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.186 [2024-07-22 18:10:21.240081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.186 qpair failed and we were unable to recover it. 00:33:17.186 [2024-07-22 18:10:21.250265] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.186 [2024-07-22 18:10:21.250414] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.186 [2024-07-22 18:10:21.250439] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.186 [2024-07-22 18:10:21.250447] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.186 [2024-07-22 18:10:21.250452] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.186 [2024-07-22 18:10:21.250471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.186 qpair failed and we were unable to recover it. 
00:33:17.186 [2024-07-22 18:10:21.259945] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.186 [2024-07-22 18:10:21.260043] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.186 [2024-07-22 18:10:21.260068] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.186 [2024-07-22 18:10:21.260075] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.186 [2024-07-22 18:10:21.260081] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.186 [2024-07-22 18:10:21.260098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.186 qpair failed and we were unable to recover it. 00:33:17.186 [2024-07-22 18:10:21.269991] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.186 [2024-07-22 18:10:21.270081] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.186 [2024-07-22 18:10:21.270107] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.186 [2024-07-22 18:10:21.270114] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.186 [2024-07-22 18:10:21.270120] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.186 [2024-07-22 18:10:21.270146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.186 qpair failed and we were unable to recover it. 00:33:17.186 [2024-07-22 18:10:21.280121] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.186 [2024-07-22 18:10:21.280199] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.186 [2024-07-22 18:10:21.280224] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.186 [2024-07-22 18:10:21.280232] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.186 [2024-07-22 18:10:21.280238] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.186 [2024-07-22 18:10:21.280256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.186 qpair failed and we were unable to recover it. 
00:33:17.186 [2024-07-22 18:10:21.290464] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.186 [2024-07-22 18:10:21.290584] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.186 [2024-07-22 18:10:21.290609] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.186 [2024-07-22 18:10:21.290617] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.186 [2024-07-22 18:10:21.290623] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.186 [2024-07-22 18:10:21.290640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.186 qpair failed and we were unable to recover it. 00:33:17.186 [2024-07-22 18:10:21.300178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.186 [2024-07-22 18:10:21.300265] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.186 [2024-07-22 18:10:21.300289] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.186 [2024-07-22 18:10:21.300297] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.187 [2024-07-22 18:10:21.300303] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.187 [2024-07-22 18:10:21.300320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.187 qpair failed and we were unable to recover it. 00:33:17.187 [2024-07-22 18:10:21.310199] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.187 [2024-07-22 18:10:21.310287] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.187 [2024-07-22 18:10:21.310311] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.187 [2024-07-22 18:10:21.310319] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.187 [2024-07-22 18:10:21.310325] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.187 [2024-07-22 18:10:21.310342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.187 qpair failed and we were unable to recover it. 
00:33:17.187 [2024-07-22 18:10:21.320235] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.187 [2024-07-22 18:10:21.320316] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.187 [2024-07-22 18:10:21.320346] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.187 [2024-07-22 18:10:21.320361] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.187 [2024-07-22 18:10:21.320367] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.187 [2024-07-22 18:10:21.320384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.187 qpair failed and we were unable to recover it. 00:33:17.187 [2024-07-22 18:10:21.330588] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.187 [2024-07-22 18:10:21.330715] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.187 [2024-07-22 18:10:21.330740] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.187 [2024-07-22 18:10:21.330747] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.187 [2024-07-22 18:10:21.330753] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.187 [2024-07-22 18:10:21.330769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.187 qpair failed and we were unable to recover it. 00:33:17.187 [2024-07-22 18:10:21.340193] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.187 [2024-07-22 18:10:21.340289] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.187 [2024-07-22 18:10:21.340314] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.187 [2024-07-22 18:10:21.340321] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.187 [2024-07-22 18:10:21.340327] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.187 [2024-07-22 18:10:21.340344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.187 qpair failed and we were unable to recover it. 
00:33:17.187 [2024-07-22 18:10:21.350240] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.187 [2024-07-22 18:10:21.350320] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.187 [2024-07-22 18:10:21.350344] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.187 [2024-07-22 18:10:21.350358] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.187 [2024-07-22 18:10:21.350364] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.187 [2024-07-22 18:10:21.350381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.187 qpair failed and we were unable to recover it. 00:33:17.187 [2024-07-22 18:10:21.360356] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.187 [2024-07-22 18:10:21.360483] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.187 [2024-07-22 18:10:21.360508] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.187 [2024-07-22 18:10:21.360516] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.187 [2024-07-22 18:10:21.360528] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.187 [2024-07-22 18:10:21.360545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.187 qpair failed and we were unable to recover it. 00:33:17.187 [2024-07-22 18:10:21.370547] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.187 [2024-07-22 18:10:21.370677] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.187 [2024-07-22 18:10:21.370703] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.187 [2024-07-22 18:10:21.370710] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.187 [2024-07-22 18:10:21.370716] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.187 [2024-07-22 18:10:21.370733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.187 qpair failed and we were unable to recover it. 
00:33:17.187 [2024-07-22 18:10:21.380423] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.187 [2024-07-22 18:10:21.380514] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.187 [2024-07-22 18:10:21.380537] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.187 [2024-07-22 18:10:21.380544] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.187 [2024-07-22 18:10:21.380550] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.187 [2024-07-22 18:10:21.380567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.187 qpair failed and we were unable to recover it. 00:33:17.187 [2024-07-22 18:10:21.390445] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.187 [2024-07-22 18:10:21.390518] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.187 [2024-07-22 18:10:21.390543] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.187 [2024-07-22 18:10:21.390550] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.187 [2024-07-22 18:10:21.390556] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.187 [2024-07-22 18:10:21.390573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.187 qpair failed and we were unable to recover it. 00:33:17.187 [2024-07-22 18:10:21.400467] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.187 [2024-07-22 18:10:21.400536] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.187 [2024-07-22 18:10:21.400559] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.187 [2024-07-22 18:10:21.400567] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.187 [2024-07-22 18:10:21.400573] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.187 [2024-07-22 18:10:21.400590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.187 qpair failed and we were unable to recover it. 
00:33:17.187 [2024-07-22 18:10:21.410789] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.187 [2024-07-22 18:10:21.410901] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.187 [2024-07-22 18:10:21.410929] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.187 [2024-07-22 18:10:21.410937] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.187 [2024-07-22 18:10:21.410942] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.187 [2024-07-22 18:10:21.410958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.187 qpair failed and we were unable to recover it. 00:33:17.187 [2024-07-22 18:10:21.420566] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.187 [2024-07-22 18:10:21.420655] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.187 [2024-07-22 18:10:21.420678] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.187 [2024-07-22 18:10:21.420685] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.187 [2024-07-22 18:10:21.420691] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.187 [2024-07-22 18:10:21.420706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.187 qpair failed and we were unable to recover it. 00:33:17.187 [2024-07-22 18:10:21.430566] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.187 [2024-07-22 18:10:21.430634] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.187 [2024-07-22 18:10:21.430655] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.187 [2024-07-22 18:10:21.430663] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.187 [2024-07-22 18:10:21.430669] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.188 [2024-07-22 18:10:21.430683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.188 qpair failed and we were unable to recover it. 
00:33:17.188 [2024-07-22 18:10:21.440693] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.188 [2024-07-22 18:10:21.440774] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.188 [2024-07-22 18:10:21.440794] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.188 [2024-07-22 18:10:21.440802] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.188 [2024-07-22 18:10:21.440808] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.188 [2024-07-22 18:10:21.440822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.188 qpair failed and we were unable to recover it. 00:33:17.188 [2024-07-22 18:10:21.450952] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.188 [2024-07-22 18:10:21.451052] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.188 [2024-07-22 18:10:21.451073] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.188 [2024-07-22 18:10:21.451080] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.188 [2024-07-22 18:10:21.451091] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.188 [2024-07-22 18:10:21.451105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.188 qpair failed and we were unable to recover it. 00:33:17.450 [2024-07-22 18:10:21.460677] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.450 [2024-07-22 18:10:21.460763] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.450 [2024-07-22 18:10:21.460783] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.450 [2024-07-22 18:10:21.460790] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.450 [2024-07-22 18:10:21.460797] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.450 [2024-07-22 18:10:21.460812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.450 qpair failed and we were unable to recover it. 
00:33:17.450 [2024-07-22 18:10:21.470680] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.450 [2024-07-22 18:10:21.470747] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.450 [2024-07-22 18:10:21.470766] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.450 [2024-07-22 18:10:21.470773] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.450 [2024-07-22 18:10:21.470779] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.450 [2024-07-22 18:10:21.470793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.450 qpair failed and we were unable to recover it. 00:33:17.450 [2024-07-22 18:10:21.480613] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.450 [2024-07-22 18:10:21.480679] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.450 [2024-07-22 18:10:21.480698] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.450 [2024-07-22 18:10:21.480705] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.450 [2024-07-22 18:10:21.480710] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.450 [2024-07-22 18:10:21.480724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.450 qpair failed and we were unable to recover it. 00:33:17.450 [2024-07-22 18:10:21.490936] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.450 [2024-07-22 18:10:21.491042] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.450 [2024-07-22 18:10:21.491060] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.450 [2024-07-22 18:10:21.491067] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.450 [2024-07-22 18:10:21.491073] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.450 [2024-07-22 18:10:21.491086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.450 qpair failed and we were unable to recover it. 
00:33:17.450 [2024-07-22 18:10:21.500703] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.451 [2024-07-22 18:10:21.500836] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.451 [2024-07-22 18:10:21.500854] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.451 [2024-07-22 18:10:21.500861] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.451 [2024-07-22 18:10:21.500866] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.451 [2024-07-22 18:10:21.500881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.451 qpair failed and we were unable to recover it. 00:33:17.451 [2024-07-22 18:10:21.510709] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.451 [2024-07-22 18:10:21.510778] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.451 [2024-07-22 18:10:21.510797] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.451 [2024-07-22 18:10:21.510804] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.451 [2024-07-22 18:10:21.510810] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.451 [2024-07-22 18:10:21.510823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.451 qpair failed and we were unable to recover it. 00:33:17.451 [2024-07-22 18:10:21.520876] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.451 [2024-07-22 18:10:21.520946] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.451 [2024-07-22 18:10:21.520963] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.451 [2024-07-22 18:10:21.520970] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.451 [2024-07-22 18:10:21.520976] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.451 [2024-07-22 18:10:21.520989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.451 qpair failed and we were unable to recover it. 
00:33:17.451 [2024-07-22 18:10:21.531057] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.451 [2024-07-22 18:10:21.531161] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.451 [2024-07-22 18:10:21.531178] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.451 [2024-07-22 18:10:21.531185] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.451 [2024-07-22 18:10:21.531191] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.451 [2024-07-22 18:10:21.531204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.451 qpair failed and we were unable to recover it. 00:33:17.451 [2024-07-22 18:10:21.541001] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.451 [2024-07-22 18:10:21.541115] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.451 [2024-07-22 18:10:21.541133] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.451 [2024-07-22 18:10:21.541139] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.451 [2024-07-22 18:10:21.541149] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.451 [2024-07-22 18:10:21.541162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.451 qpair failed and we were unable to recover it. 00:33:17.451 [2024-07-22 18:10:21.550969] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.451 [2024-07-22 18:10:21.551037] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.451 [2024-07-22 18:10:21.551055] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.451 [2024-07-22 18:10:21.551061] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.451 [2024-07-22 18:10:21.551068] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.451 [2024-07-22 18:10:21.551083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.451 qpair failed and we were unable to recover it. 
00:33:17.451 [2024-07-22 18:10:21.561010] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.451 [2024-07-22 18:10:21.561085] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.451 [2024-07-22 18:10:21.561102] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.451 [2024-07-22 18:10:21.561109] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.451 [2024-07-22 18:10:21.561114] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.451 [2024-07-22 18:10:21.561128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.451 qpair failed and we were unable to recover it. 00:33:17.451 [2024-07-22 18:10:21.571297] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.451 [2024-07-22 18:10:21.571428] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.451 [2024-07-22 18:10:21.571444] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.451 [2024-07-22 18:10:21.571450] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.451 [2024-07-22 18:10:21.571456] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.451 [2024-07-22 18:10:21.571469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.451 qpair failed and we were unable to recover it. 00:33:17.451 [2024-07-22 18:10:21.581147] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.451 [2024-07-22 18:10:21.581253] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.451 [2024-07-22 18:10:21.581269] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.451 [2024-07-22 18:10:21.581276] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.451 [2024-07-22 18:10:21.581282] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.451 [2024-07-22 18:10:21.581295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.451 qpair failed and we were unable to recover it. 
00:33:17.451 [2024-07-22 18:10:21.591060] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.451 [2024-07-22 18:10:21.591132] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.451 [2024-07-22 18:10:21.591148] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.451 [2024-07-22 18:10:21.591155] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.451 [2024-07-22 18:10:21.591161] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.451 [2024-07-22 18:10:21.591174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.451 qpair failed and we were unable to recover it. 00:33:17.451 [2024-07-22 18:10:21.601022] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.451 [2024-07-22 18:10:21.601092] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.451 [2024-07-22 18:10:21.601109] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.451 [2024-07-22 18:10:21.601116] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.451 [2024-07-22 18:10:21.601122] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.451 [2024-07-22 18:10:21.601138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.451 qpair failed and we were unable to recover it. 00:33:17.451 [2024-07-22 18:10:21.611475] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.451 [2024-07-22 18:10:21.611584] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.451 [2024-07-22 18:10:21.611600] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.451 [2024-07-22 18:10:21.611606] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.451 [2024-07-22 18:10:21.611612] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.451 [2024-07-22 18:10:21.611625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.451 qpair failed and we were unable to recover it. 
00:33:17.451 [2024-07-22 18:10:21.621260] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.451 [2024-07-22 18:10:21.621367] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.451 [2024-07-22 18:10:21.621383] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.451 [2024-07-22 18:10:21.621390] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.451 [2024-07-22 18:10:21.621395] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.451 [2024-07-22 18:10:21.621408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.451 qpair failed and we were unable to recover it. 00:33:17.451 [2024-07-22 18:10:21.631233] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.451 [2024-07-22 18:10:21.631300] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.451 [2024-07-22 18:10:21.631315] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.451 [2024-07-22 18:10:21.631322] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.452 [2024-07-22 18:10:21.631331] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.452 [2024-07-22 18:10:21.631344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.452 qpair failed and we were unable to recover it. 00:33:17.452 [2024-07-22 18:10:21.641251] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.452 [2024-07-22 18:10:21.641356] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.452 [2024-07-22 18:10:21.641372] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.452 [2024-07-22 18:10:21.641379] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.452 [2024-07-22 18:10:21.641384] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.452 [2024-07-22 18:10:21.641397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.452 qpair failed and we were unable to recover it. 
00:33:17.452 [2024-07-22 18:10:21.651589] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.452 [2024-07-22 18:10:21.651737] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.452 [2024-07-22 18:10:21.651751] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.452 [2024-07-22 18:10:21.651758] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.452 [2024-07-22 18:10:21.651764] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.452 [2024-07-22 18:10:21.651776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.452 qpair failed and we were unable to recover it. 00:33:17.452 [2024-07-22 18:10:21.661344] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.452 [2024-07-22 18:10:21.661479] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.452 [2024-07-22 18:10:21.661495] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.452 [2024-07-22 18:10:21.661502] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.452 [2024-07-22 18:10:21.661507] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.452 [2024-07-22 18:10:21.661520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.452 qpair failed and we were unable to recover it. 00:33:17.452 [2024-07-22 18:10:21.671342] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.452 [2024-07-22 18:10:21.671444] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.452 [2024-07-22 18:10:21.671460] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.452 [2024-07-22 18:10:21.671467] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.452 [2024-07-22 18:10:21.671472] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.452 [2024-07-22 18:10:21.671485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.452 qpair failed and we were unable to recover it. 
00:33:17.452 [2024-07-22 18:10:21.681302] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.452 [2024-07-22 18:10:21.681388] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.452 [2024-07-22 18:10:21.681404] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.452 [2024-07-22 18:10:21.681411] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.452 [2024-07-22 18:10:21.681416] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.452 [2024-07-22 18:10:21.681429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.452 qpair failed and we were unable to recover it. 00:33:17.452 [2024-07-22 18:10:21.691754] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.452 [2024-07-22 18:10:21.691866] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.452 [2024-07-22 18:10:21.691882] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.452 [2024-07-22 18:10:21.691889] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.452 [2024-07-22 18:10:21.691894] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.452 [2024-07-22 18:10:21.691907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.452 qpair failed and we were unable to recover it. 00:33:17.452 [2024-07-22 18:10:21.701484] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.452 [2024-07-22 18:10:21.701589] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.452 [2024-07-22 18:10:21.701604] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.452 [2024-07-22 18:10:21.701610] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.452 [2024-07-22 18:10:21.701616] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.452 [2024-07-22 18:10:21.701629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.452 qpair failed and we were unable to recover it. 
00:33:17.452 [2024-07-22 18:10:21.711378] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.452 [2024-07-22 18:10:21.711452] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.452 [2024-07-22 18:10:21.711467] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.452 [2024-07-22 18:10:21.711475] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.452 [2024-07-22 18:10:21.711480] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.452 [2024-07-22 18:10:21.711493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.452 qpair failed and we were unable to recover it. 00:33:17.452 [2024-07-22 18:10:21.721430] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.452 [2024-07-22 18:10:21.721503] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.452 [2024-07-22 18:10:21.721518] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.452 [2024-07-22 18:10:21.721529] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.452 [2024-07-22 18:10:21.721535] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.452 [2024-07-22 18:10:21.721547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.452 qpair failed and we were unable to recover it. 00:33:17.714 [2024-07-22 18:10:21.731889] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.714 [2024-07-22 18:10:21.731998] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.714 [2024-07-22 18:10:21.732013] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.714 [2024-07-22 18:10:21.732019] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.714 [2024-07-22 18:10:21.732025] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.714 [2024-07-22 18:10:21.732038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.714 qpair failed and we were unable to recover it. 
00:33:17.714 [2024-07-22 18:10:21.741646] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.714 [2024-07-22 18:10:21.741725] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.714 [2024-07-22 18:10:21.741741] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.714 [2024-07-22 18:10:21.741747] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.714 [2024-07-22 18:10:21.741752] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.714 [2024-07-22 18:10:21.741765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.714 qpair failed and we were unable to recover it. 00:33:17.714 [2024-07-22 18:10:21.751675] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.715 [2024-07-22 18:10:21.751743] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.715 [2024-07-22 18:10:21.751758] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.715 [2024-07-22 18:10:21.751765] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.715 [2024-07-22 18:10:21.751770] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.715 [2024-07-22 18:10:21.751782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.715 qpair failed and we were unable to recover it. 00:33:17.715 [2024-07-22 18:10:21.761687] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.715 [2024-07-22 18:10:21.761752] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.715 [2024-07-22 18:10:21.761767] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.715 [2024-07-22 18:10:21.761773] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.715 [2024-07-22 18:10:21.761779] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.715 [2024-07-22 18:10:21.761791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.715 qpair failed and we were unable to recover it. 
00:33:17.715 [2024-07-22 18:10:21.772067] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.715 [2024-07-22 18:10:21.772190] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.715 [2024-07-22 18:10:21.772205] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.715 [2024-07-22 18:10:21.772212] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.715 [2024-07-22 18:10:21.772217] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.715 [2024-07-22 18:10:21.772230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.715 qpair failed and we were unable to recover it. 00:33:17.715 [2024-07-22 18:10:21.781746] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.715 [2024-07-22 18:10:21.781824] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.715 [2024-07-22 18:10:21.781840] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.715 [2024-07-22 18:10:21.781846] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.715 [2024-07-22 18:10:21.781851] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.715 [2024-07-22 18:10:21.781864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.715 qpair failed and we were unable to recover it. 00:33:17.715 [2024-07-22 18:10:21.791780] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.715 [2024-07-22 18:10:21.791850] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.715 [2024-07-22 18:10:21.791864] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.715 [2024-07-22 18:10:21.791871] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.715 [2024-07-22 18:10:21.791876] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.715 [2024-07-22 18:10:21.791889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.715 qpair failed and we were unable to recover it. 
00:33:17.715 [2024-07-22 18:10:21.801895] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.715 [2024-07-22 18:10:21.801964] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.715 [2024-07-22 18:10:21.801978] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.715 [2024-07-22 18:10:21.801985] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.715 [2024-07-22 18:10:21.801990] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.715 [2024-07-22 18:10:21.802003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.715 qpair failed and we were unable to recover it. 00:33:17.715 [2024-07-22 18:10:21.812065] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.715 [2024-07-22 18:10:21.812173] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.715 [2024-07-22 18:10:21.812188] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.715 [2024-07-22 18:10:21.812198] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.715 [2024-07-22 18:10:21.812204] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.715 [2024-07-22 18:10:21.812216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.715 qpair failed and we were unable to recover it. 00:33:17.715 [2024-07-22 18:10:21.821941] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.715 [2024-07-22 18:10:21.822022] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.715 [2024-07-22 18:10:21.822037] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.715 [2024-07-22 18:10:21.822043] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.715 [2024-07-22 18:10:21.822049] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.715 [2024-07-22 18:10:21.822061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.715 qpair failed and we were unable to recover it. 
00:33:17.715 [2024-07-22 18:10:21.831847] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.715 [2024-07-22 18:10:21.831921] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.715 [2024-07-22 18:10:21.831936] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.715 [2024-07-22 18:10:21.831943] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.715 [2024-07-22 18:10:21.831949] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.715 [2024-07-22 18:10:21.831961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.715 qpair failed and we were unable to recover it. 00:33:17.715 [2024-07-22 18:10:21.841867] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.715 [2024-07-22 18:10:21.841954] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.715 [2024-07-22 18:10:21.841970] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.715 [2024-07-22 18:10:21.841976] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.715 [2024-07-22 18:10:21.841981] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.715 [2024-07-22 18:10:21.841994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.715 qpair failed and we were unable to recover it. 00:33:17.715 [2024-07-22 18:10:21.852300] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.715 [2024-07-22 18:10:21.852423] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.715 [2024-07-22 18:10:21.852439] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.715 [2024-07-22 18:10:21.852446] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.715 [2024-07-22 18:10:21.852452] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.715 [2024-07-22 18:10:21.852465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.715 qpair failed and we were unable to recover it. 
00:33:17.715 [2024-07-22 18:10:21.862020] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.715 [2024-07-22 18:10:21.862122] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.715 [2024-07-22 18:10:21.862138] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.715 [2024-07-22 18:10:21.862144] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.715 [2024-07-22 18:10:21.862150] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.715 [2024-07-22 18:10:21.862163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.715 qpair failed and we were unable to recover it. 00:33:17.715 [2024-07-22 18:10:21.871964] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.715 [2024-07-22 18:10:21.872032] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.715 [2024-07-22 18:10:21.872047] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.715 [2024-07-22 18:10:21.872054] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.715 [2024-07-22 18:10:21.872059] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.715 [2024-07-22 18:10:21.872071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.715 qpair failed and we were unable to recover it. 00:33:17.715 [2024-07-22 18:10:21.882115] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.715 [2024-07-22 18:10:21.882189] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.715 [2024-07-22 18:10:21.882205] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.715 [2024-07-22 18:10:21.882211] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.715 [2024-07-22 18:10:21.882217] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.716 [2024-07-22 18:10:21.882230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.716 qpair failed and we were unable to recover it. 
00:33:17.716 [2024-07-22 18:10:21.892424] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.716 [2024-07-22 18:10:21.892534] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.716 [2024-07-22 18:10:21.892549] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.716 [2024-07-22 18:10:21.892557] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.716 [2024-07-22 18:10:21.892563] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.716 [2024-07-22 18:10:21.892576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.716 qpair failed and we were unable to recover it. 00:33:17.716 [2024-07-22 18:10:21.902179] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.716 [2024-07-22 18:10:21.902266] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.716 [2024-07-22 18:10:21.902281] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.716 [2024-07-22 18:10:21.902291] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.716 [2024-07-22 18:10:21.902297] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.716 [2024-07-22 18:10:21.902309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.716 qpair failed and we were unable to recover it. 00:33:17.716 [2024-07-22 18:10:21.912103] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.716 [2024-07-22 18:10:21.912171] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.716 [2024-07-22 18:10:21.912186] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.716 [2024-07-22 18:10:21.912193] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.716 [2024-07-22 18:10:21.912198] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.716 [2024-07-22 18:10:21.912210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.716 qpair failed and we were unable to recover it. 
00:33:17.716 [2024-07-22 18:10:21.922237] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.716 [2024-07-22 18:10:21.922302] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.716 [2024-07-22 18:10:21.922317] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.716 [2024-07-22 18:10:21.922324] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.716 [2024-07-22 18:10:21.922329] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.716 [2024-07-22 18:10:21.922341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.716 qpair failed and we were unable to recover it. 00:33:17.716 [2024-07-22 18:10:21.932575] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.716 [2024-07-22 18:10:21.932682] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.716 [2024-07-22 18:10:21.932697] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.716 [2024-07-22 18:10:21.932704] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.716 [2024-07-22 18:10:21.932710] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.716 [2024-07-22 18:10:21.932722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.716 qpair failed and we were unable to recover it. 00:33:17.716 [2024-07-22 18:10:21.942299] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.716 [2024-07-22 18:10:21.942400] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.716 [2024-07-22 18:10:21.942416] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.716 [2024-07-22 18:10:21.942423] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.716 [2024-07-22 18:10:21.942429] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.716 [2024-07-22 18:10:21.942442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.716 qpair failed and we were unable to recover it. 
00:33:17.716 [2024-07-22 18:10:21.952331] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.716 [2024-07-22 18:10:21.952420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.716 [2024-07-22 18:10:21.952436] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.716 [2024-07-22 18:10:21.952443] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.716 [2024-07-22 18:10:21.952449] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.716 [2024-07-22 18:10:21.952461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.716 qpair failed and we were unable to recover it. 00:33:17.716 [2024-07-22 18:10:21.962361] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.716 [2024-07-22 18:10:21.962437] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.716 [2024-07-22 18:10:21.962452] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.716 [2024-07-22 18:10:21.962459] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.716 [2024-07-22 18:10:21.962465] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.716 [2024-07-22 18:10:21.962477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.716 qpair failed and we were unable to recover it. 00:33:17.716 [2024-07-22 18:10:21.972727] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.716 [2024-07-22 18:10:21.972837] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.716 [2024-07-22 18:10:21.972851] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.716 [2024-07-22 18:10:21.972858] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.716 [2024-07-22 18:10:21.972863] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.716 [2024-07-22 18:10:21.972876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.716 qpair failed and we were unable to recover it. 
00:33:17.716 [2024-07-22 18:10:21.982395] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.716 [2024-07-22 18:10:21.982478] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.716 [2024-07-22 18:10:21.982492] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.716 [2024-07-22 18:10:21.982499] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.716 [2024-07-22 18:10:21.982505] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.716 [2024-07-22 18:10:21.982517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.716 qpair failed and we were unable to recover it. 00:33:17.978 [2024-07-22 18:10:21.992426] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.978 [2024-07-22 18:10:21.992495] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.978 [2024-07-22 18:10:21.992510] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.978 [2024-07-22 18:10:21.992520] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.978 [2024-07-22 18:10:21.992525] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.978 [2024-07-22 18:10:21.992538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.978 qpair failed and we were unable to recover it. 00:33:17.978 [2024-07-22 18:10:22.002504] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.978 [2024-07-22 18:10:22.002570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.978 [2024-07-22 18:10:22.002586] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.978 [2024-07-22 18:10:22.002592] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.978 [2024-07-22 18:10:22.002598] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.978 [2024-07-22 18:10:22.002611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.978 qpair failed and we were unable to recover it. 
00:33:17.978 [2024-07-22 18:10:22.012813] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.978 [2024-07-22 18:10:22.012938] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.978 [2024-07-22 18:10:22.012953] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.978 [2024-07-22 18:10:22.012960] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.978 [2024-07-22 18:10:22.012965] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.978 [2024-07-22 18:10:22.012978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.978 qpair failed and we were unable to recover it. 00:33:17.978 [2024-07-22 18:10:22.022574] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.978 [2024-07-22 18:10:22.022653] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.978 [2024-07-22 18:10:22.022669] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.978 [2024-07-22 18:10:22.022675] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.978 [2024-07-22 18:10:22.022680] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.978 [2024-07-22 18:10:22.022693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.978 qpair failed and we were unable to recover it. 00:33:17.978 [2024-07-22 18:10:22.032564] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.978 [2024-07-22 18:10:22.032659] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.978 [2024-07-22 18:10:22.032674] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.978 [2024-07-22 18:10:22.032680] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.978 [2024-07-22 18:10:22.032686] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.978 [2024-07-22 18:10:22.032699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.978 qpair failed and we were unable to recover it. 
00:33:17.978 [2024-07-22 18:10:22.042593] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.978 [2024-07-22 18:10:22.042660] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.978 [2024-07-22 18:10:22.042675] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.978 [2024-07-22 18:10:22.042682] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.978 [2024-07-22 18:10:22.042687] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.979 [2024-07-22 18:10:22.042700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-07-22 18:10:22.052942] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.979 [2024-07-22 18:10:22.053052] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.979 [2024-07-22 18:10:22.053067] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.979 [2024-07-22 18:10:22.053073] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.979 [2024-07-22 18:10:22.053079] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.979 [2024-07-22 18:10:22.053091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-07-22 18:10:22.062694] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.979 [2024-07-22 18:10:22.062780] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.979 [2024-07-22 18:10:22.062795] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.979 [2024-07-22 18:10:22.062802] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.979 [2024-07-22 18:10:22.062807] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.979 [2024-07-22 18:10:22.062820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.979 qpair failed and we were unable to recover it. 
00:33:17.979 [2024-07-22 18:10:22.072755] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.979 [2024-07-22 18:10:22.072823] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.979 [2024-07-22 18:10:22.072838] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.979 [2024-07-22 18:10:22.072844] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.979 [2024-07-22 18:10:22.072850] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.979 [2024-07-22 18:10:22.072862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-07-22 18:10:22.082744] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.979 [2024-07-22 18:10:22.082812] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.979 [2024-07-22 18:10:22.082827] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.979 [2024-07-22 18:10:22.082839] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.979 [2024-07-22 18:10:22.082845] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.979 [2024-07-22 18:10:22.082857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-07-22 18:10:22.092983] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.979 [2024-07-22 18:10:22.093092] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.979 [2024-07-22 18:10:22.093107] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.979 [2024-07-22 18:10:22.093114] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.979 [2024-07-22 18:10:22.093119] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.979 [2024-07-22 18:10:22.093131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.979 qpair failed and we were unable to recover it. 
00:33:17.979 [2024-07-22 18:10:22.102824] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.979 [2024-07-22 18:10:22.102903] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.979 [2024-07-22 18:10:22.102918] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.979 [2024-07-22 18:10:22.102925] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.979 [2024-07-22 18:10:22.102930] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.979 [2024-07-22 18:10:22.102942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-07-22 18:10:22.112732] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.979 [2024-07-22 18:10:22.112801] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.979 [2024-07-22 18:10:22.112816] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.979 [2024-07-22 18:10:22.112822] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.979 [2024-07-22 18:10:22.112828] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.979 [2024-07-22 18:10:22.112840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-07-22 18:10:22.122863] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.979 [2024-07-22 18:10:22.122930] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.979 [2024-07-22 18:10:22.122945] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.979 [2024-07-22 18:10:22.122951] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.979 [2024-07-22 18:10:22.122957] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.979 [2024-07-22 18:10:22.122969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.979 qpair failed and we were unable to recover it. 
00:33:17.979 [2024-07-22 18:10:22.133059] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.979 [2024-07-22 18:10:22.133158] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.979 [2024-07-22 18:10:22.133173] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.979 [2024-07-22 18:10:22.133179] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.979 [2024-07-22 18:10:22.133185] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.979 [2024-07-22 18:10:22.133197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-07-22 18:10:22.142925] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.979 [2024-07-22 18:10:22.143002] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.979 [2024-07-22 18:10:22.143017] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.979 [2024-07-22 18:10:22.143023] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.979 [2024-07-22 18:10:22.143029] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.979 [2024-07-22 18:10:22.143041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-07-22 18:10:22.152874] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.979 [2024-07-22 18:10:22.152959] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.979 [2024-07-22 18:10:22.152974] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.979 [2024-07-22 18:10:22.152981] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.979 [2024-07-22 18:10:22.152986] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.979 [2024-07-22 18:10:22.152998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.979 qpair failed and we were unable to recover it. 
00:33:17.979 [2024-07-22 18:10:22.163018] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.979 [2024-07-22 18:10:22.163093] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.979 [2024-07-22 18:10:22.163108] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.979 [2024-07-22 18:10:22.163115] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.979 [2024-07-22 18:10:22.163121] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.979 [2024-07-22 18:10:22.163133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-07-22 18:10:22.173187] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.979 [2024-07-22 18:10:22.173283] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.979 [2024-07-22 18:10:22.173301] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.979 [2024-07-22 18:10:22.173308] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.979 [2024-07-22 18:10:22.173313] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.979 [2024-07-22 18:10:22.173326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.979 qpair failed and we were unable to recover it. 00:33:17.979 [2024-07-22 18:10:22.182951] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.980 [2024-07-22 18:10:22.183033] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.980 [2024-07-22 18:10:22.183048] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.980 [2024-07-22 18:10:22.183055] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.980 [2024-07-22 18:10:22.183060] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.980 [2024-07-22 18:10:22.183073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.980 qpair failed and we were unable to recover it. 
00:33:17.980 [2024-07-22 18:10:22.193115] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.980 [2024-07-22 18:10:22.193184] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.980 [2024-07-22 18:10:22.193198] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.980 [2024-07-22 18:10:22.193205] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.980 [2024-07-22 18:10:22.193210] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.980 [2024-07-22 18:10:22.193223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.980 qpair failed and we were unable to recover it. 00:33:17.980 [2024-07-22 18:10:22.203112] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.980 [2024-07-22 18:10:22.203188] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.980 [2024-07-22 18:10:22.203204] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.980 [2024-07-22 18:10:22.203210] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.980 [2024-07-22 18:10:22.203216] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.980 [2024-07-22 18:10:22.203229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.980 qpair failed and we were unable to recover it. 00:33:17.980 [2024-07-22 18:10:22.213481] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.980 [2024-07-22 18:10:22.213609] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.980 [2024-07-22 18:10:22.213624] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.980 [2024-07-22 18:10:22.213631] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.980 [2024-07-22 18:10:22.213636] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.980 [2024-07-22 18:10:22.213650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.980 qpair failed and we were unable to recover it. 
00:33:17.980 [2024-07-22 18:10:22.223190] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.980 [2024-07-22 18:10:22.223298] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.980 [2024-07-22 18:10:22.223313] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.980 [2024-07-22 18:10:22.223320] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.980 [2024-07-22 18:10:22.223326] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.980 [2024-07-22 18:10:22.223339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.980 qpair failed and we were unable to recover it. 00:33:17.980 [2024-07-22 18:10:22.233230] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.980 [2024-07-22 18:10:22.233315] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.980 [2024-07-22 18:10:22.233330] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.980 [2024-07-22 18:10:22.233337] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.980 [2024-07-22 18:10:22.233342] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.980 [2024-07-22 18:10:22.233358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.980 qpair failed and we were unable to recover it. 00:33:17.980 [2024-07-22 18:10:22.243195] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:17.980 [2024-07-22 18:10:22.243272] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:17.980 [2024-07-22 18:10:22.243287] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:17.980 [2024-07-22 18:10:22.243293] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:17.980 [2024-07-22 18:10:22.243299] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:17.980 [2024-07-22 18:10:22.243311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:17.980 qpair failed and we were unable to recover it. 
00:33:18.242 [2024-07-22 18:10:22.253582] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.242 [2024-07-22 18:10:22.253693] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.242 [2024-07-22 18:10:22.253708] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.242 [2024-07-22 18:10:22.253715] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.242 [2024-07-22 18:10:22.253721] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.242 [2024-07-22 18:10:22.253733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.242 qpair failed and we were unable to recover it. 00:33:18.242 [2024-07-22 18:10:22.263357] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.242 [2024-07-22 18:10:22.263434] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.242 [2024-07-22 18:10:22.263452] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.242 [2024-07-22 18:10:22.263459] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.242 [2024-07-22 18:10:22.263464] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.242 [2024-07-22 18:10:22.263477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.242 qpair failed and we were unable to recover it. 00:33:18.242 [2024-07-22 18:10:22.273363] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.242 [2024-07-22 18:10:22.273430] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.242 [2024-07-22 18:10:22.273445] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.242 [2024-07-22 18:10:22.273451] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.242 [2024-07-22 18:10:22.273457] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.242 [2024-07-22 18:10:22.273469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.242 qpair failed and we were unable to recover it. 
00:33:18.242 [2024-07-22 18:10:22.283385] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.242 [2024-07-22 18:10:22.283455] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.242 [2024-07-22 18:10:22.283471] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.242 [2024-07-22 18:10:22.283477] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.242 [2024-07-22 18:10:22.283483] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.242 [2024-07-22 18:10:22.283496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.242 qpair failed and we were unable to recover it. 00:33:18.242 [2024-07-22 18:10:22.293713] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.242 [2024-07-22 18:10:22.293824] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.242 [2024-07-22 18:10:22.293838] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.242 [2024-07-22 18:10:22.293845] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.242 [2024-07-22 18:10:22.293850] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.242 [2024-07-22 18:10:22.293863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.242 qpair failed and we were unable to recover it. 00:33:18.242 [2024-07-22 18:10:22.303327] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.242 [2024-07-22 18:10:22.303412] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.242 [2024-07-22 18:10:22.303426] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.242 [2024-07-22 18:10:22.303433] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.242 [2024-07-22 18:10:22.303438] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.242 [2024-07-22 18:10:22.303455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.242 qpair failed and we were unable to recover it. 
00:33:18.242 [2024-07-22 18:10:22.313497] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.242 [2024-07-22 18:10:22.313569] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.242 [2024-07-22 18:10:22.313585] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.242 [2024-07-22 18:10:22.313592] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.242 [2024-07-22 18:10:22.313599] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.242 [2024-07-22 18:10:22.313614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.242 qpair failed and we were unable to recover it. 00:33:18.242 [2024-07-22 18:10:22.323490] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.242 [2024-07-22 18:10:22.323552] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.242 [2024-07-22 18:10:22.323567] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.242 [2024-07-22 18:10:22.323573] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.242 [2024-07-22 18:10:22.323579] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.242 [2024-07-22 18:10:22.323591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.242 qpair failed and we were unable to recover it. 00:33:18.242 [2024-07-22 18:10:22.333908] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.242 [2024-07-22 18:10:22.334015] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.242 [2024-07-22 18:10:22.334029] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.242 [2024-07-22 18:10:22.334036] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.242 [2024-07-22 18:10:22.334041] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.242 [2024-07-22 18:10:22.334054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.242 qpair failed and we were unable to recover it. 
00:33:18.242 [2024-07-22 18:10:22.343559] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.242 [2024-07-22 18:10:22.343639] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.242 [2024-07-22 18:10:22.343655] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.242 [2024-07-22 18:10:22.343661] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.242 [2024-07-22 18:10:22.343667] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.242 [2024-07-22 18:10:22.343682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.242 qpair failed and we were unable to recover it. 00:33:18.242 [2024-07-22 18:10:22.353488] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.242 [2024-07-22 18:10:22.353550] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.242 [2024-07-22 18:10:22.353569] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.242 [2024-07-22 18:10:22.353575] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.242 [2024-07-22 18:10:22.353581] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.242 [2024-07-22 18:10:22.353594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.242 qpair failed and we were unable to recover it. 00:33:18.242 [2024-07-22 18:10:22.363646] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.242 [2024-07-22 18:10:22.363716] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.242 [2024-07-22 18:10:22.363731] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.242 [2024-07-22 18:10:22.363738] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.242 [2024-07-22 18:10:22.363744] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.242 [2024-07-22 18:10:22.363756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.243 qpair failed and we were unable to recover it. 
00:33:18.243 [2024-07-22 18:10:22.373994] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.243 [2024-07-22 18:10:22.374139] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.243 [2024-07-22 18:10:22.374154] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.243 [2024-07-22 18:10:22.374161] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.243 [2024-07-22 18:10:22.374166] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.243 [2024-07-22 18:10:22.374179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.243 qpair failed and we were unable to recover it. 00:33:18.243 [2024-07-22 18:10:22.383761] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.243 [2024-07-22 18:10:22.383840] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.243 [2024-07-22 18:10:22.383855] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.243 [2024-07-22 18:10:22.383862] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.243 [2024-07-22 18:10:22.383867] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.243 [2024-07-22 18:10:22.383880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.243 qpair failed and we were unable to recover it. 00:33:18.243 [2024-07-22 18:10:22.393756] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.243 [2024-07-22 18:10:22.393818] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.243 [2024-07-22 18:10:22.393833] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.243 [2024-07-22 18:10:22.393840] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.243 [2024-07-22 18:10:22.393846] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.243 [2024-07-22 18:10:22.393862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.243 qpair failed and we were unable to recover it. 
00:33:18.243 [2024-07-22 18:10:22.403800] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.243 [2024-07-22 18:10:22.403868] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.243 [2024-07-22 18:10:22.403883] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.243 [2024-07-22 18:10:22.403889] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.243 [2024-07-22 18:10:22.403894] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.243 [2024-07-22 18:10:22.403907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.243 qpair failed and we were unable to recover it. 00:33:18.243 [2024-07-22 18:10:22.414134] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.243 [2024-07-22 18:10:22.414236] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.243 [2024-07-22 18:10:22.414250] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.243 [2024-07-22 18:10:22.414256] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.243 [2024-07-22 18:10:22.414262] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.243 [2024-07-22 18:10:22.414274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.243 qpair failed and we were unable to recover it. 00:33:18.243 [2024-07-22 18:10:22.423854] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.243 [2024-07-22 18:10:22.423934] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.243 [2024-07-22 18:10:22.423949] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.243 [2024-07-22 18:10:22.423955] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.243 [2024-07-22 18:10:22.423961] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.243 [2024-07-22 18:10:22.423973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.243 qpair failed and we were unable to recover it. 
00:33:18.243 [2024-07-22 18:10:22.433896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.243 [2024-07-22 18:10:22.433966] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.243 [2024-07-22 18:10:22.433981] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.243 [2024-07-22 18:10:22.433987] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.243 [2024-07-22 18:10:22.433993] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.243 [2024-07-22 18:10:22.434005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.243 qpair failed and we were unable to recover it. 00:33:18.243 [2024-07-22 18:10:22.443944] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.243 [2024-07-22 18:10:22.444016] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.243 [2024-07-22 18:10:22.444034] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.243 [2024-07-22 18:10:22.444041] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.243 [2024-07-22 18:10:22.444046] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.243 [2024-07-22 18:10:22.444059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.243 qpair failed and we were unable to recover it. 00:33:18.243 [2024-07-22 18:10:22.454261] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.243 [2024-07-22 18:10:22.454373] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.243 [2024-07-22 18:10:22.454390] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.243 [2024-07-22 18:10:22.454396] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.243 [2024-07-22 18:10:22.454402] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.243 [2024-07-22 18:10:22.454414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.243 qpair failed and we were unable to recover it. 
00:33:18.243 [2024-07-22 18:10:22.464013] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.243 [2024-07-22 18:10:22.464103] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.243 [2024-07-22 18:10:22.464118] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.243 [2024-07-22 18:10:22.464125] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.243 [2024-07-22 18:10:22.464131] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.243 [2024-07-22 18:10:22.464143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.243 qpair failed and we were unable to recover it. 00:33:18.243 [2024-07-22 18:10:22.474000] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.243 [2024-07-22 18:10:22.474072] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.243 [2024-07-22 18:10:22.474087] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.243 [2024-07-22 18:10:22.474093] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.243 [2024-07-22 18:10:22.474099] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.243 [2024-07-22 18:10:22.474111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.243 qpair failed and we were unable to recover it. 00:33:18.243 [2024-07-22 18:10:22.484073] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.243 [2024-07-22 18:10:22.484176] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.243 [2024-07-22 18:10:22.484191] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.243 [2024-07-22 18:10:22.484197] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.243 [2024-07-22 18:10:22.484203] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.243 [2024-07-22 18:10:22.484219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.243 qpair failed and we were unable to recover it. 
00:33:18.243 [2024-07-22 18:10:22.494422] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.243 [2024-07-22 18:10:22.494531] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.243 [2024-07-22 18:10:22.494546] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.243 [2024-07-22 18:10:22.494553] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.243 [2024-07-22 18:10:22.494559] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.243 [2024-07-22 18:10:22.494571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.243 qpair failed and we were unable to recover it. 00:33:18.243 [2024-07-22 18:10:22.504152] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.244 [2024-07-22 18:10:22.504264] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.244 [2024-07-22 18:10:22.504278] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.244 [2024-07-22 18:10:22.504285] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.244 [2024-07-22 18:10:22.504291] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.244 [2024-07-22 18:10:22.504303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.244 qpair failed and we were unable to recover it. 00:33:18.244 [2024-07-22 18:10:22.514213] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.244 [2024-07-22 18:10:22.514327] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.244 [2024-07-22 18:10:22.514342] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.244 [2024-07-22 18:10:22.514352] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.244 [2024-07-22 18:10:22.514359] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.244 [2024-07-22 18:10:22.514371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.244 qpair failed and we were unable to recover it. 
00:33:18.506 [2024-07-22 18:10:22.524193] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.506 [2024-07-22 18:10:22.524305] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.506 [2024-07-22 18:10:22.524320] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.506 [2024-07-22 18:10:22.524327] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.506 [2024-07-22 18:10:22.524333] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.506 [2024-07-22 18:10:22.524345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.506 qpair failed and we were unable to recover it. 00:33:18.506 [2024-07-22 18:10:22.534444] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.506 [2024-07-22 18:10:22.534558] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.506 [2024-07-22 18:10:22.534577] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.506 [2024-07-22 18:10:22.534584] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.506 [2024-07-22 18:10:22.534589] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.506 [2024-07-22 18:10:22.534603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.506 qpair failed and we were unable to recover it. 00:33:18.506 [2024-07-22 18:10:22.544323] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.506 [2024-07-22 18:10:22.544410] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.506 [2024-07-22 18:10:22.544425] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.506 [2024-07-22 18:10:22.544432] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.506 [2024-07-22 18:10:22.544437] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.506 [2024-07-22 18:10:22.544450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.506 qpair failed and we were unable to recover it. 
00:33:18.506 [2024-07-22 18:10:22.554313] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.506 [2024-07-22 18:10:22.554382] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.506 [2024-07-22 18:10:22.554397] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.506 [2024-07-22 18:10:22.554404] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.506 [2024-07-22 18:10:22.554409] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.506 [2024-07-22 18:10:22.554422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.506 qpair failed and we were unable to recover it. 00:33:18.506 [2024-07-22 18:10:22.564400] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.506 [2024-07-22 18:10:22.564467] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.506 [2024-07-22 18:10:22.564482] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.506 [2024-07-22 18:10:22.564488] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.506 [2024-07-22 18:10:22.564494] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.506 [2024-07-22 18:10:22.564507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.506 qpair failed and we were unable to recover it. 00:33:18.506 [2024-07-22 18:10:22.574675] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.506 [2024-07-22 18:10:22.574779] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.506 [2024-07-22 18:10:22.574794] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.506 [2024-07-22 18:10:22.574800] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.506 [2024-07-22 18:10:22.574809] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.506 [2024-07-22 18:10:22.574821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.506 qpair failed and we were unable to recover it. 
00:33:18.506 [2024-07-22 18:10:22.584435] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.506 [2024-07-22 18:10:22.584520] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.506 [2024-07-22 18:10:22.584535] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.506 [2024-07-22 18:10:22.584541] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.506 [2024-07-22 18:10:22.584547] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.506 [2024-07-22 18:10:22.584560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.506 qpair failed and we were unable to recover it. 00:33:18.506 [2024-07-22 18:10:22.594457] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.506 [2024-07-22 18:10:22.594522] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.506 [2024-07-22 18:10:22.594537] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.506 [2024-07-22 18:10:22.594544] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.506 [2024-07-22 18:10:22.594549] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.507 [2024-07-22 18:10:22.594562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.507 qpair failed and we were unable to recover it. 00:33:18.507 [2024-07-22 18:10:22.604523] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.507 [2024-07-22 18:10:22.604594] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.507 [2024-07-22 18:10:22.604609] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.507 [2024-07-22 18:10:22.604615] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.507 [2024-07-22 18:10:22.604621] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.507 [2024-07-22 18:10:22.604633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.507 qpair failed and we were unable to recover it. 
00:33:18.507 [2024-07-22 18:10:22.614820] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.507 [2024-07-22 18:10:22.614929] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.507 [2024-07-22 18:10:22.614944] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.507 [2024-07-22 18:10:22.614950] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.507 [2024-07-22 18:10:22.614956] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.507 [2024-07-22 18:10:22.614968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.507 qpair failed and we were unable to recover it. 00:33:18.507 [2024-07-22 18:10:22.624549] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.507 [2024-07-22 18:10:22.624684] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.507 [2024-07-22 18:10:22.624702] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.507 [2024-07-22 18:10:22.624709] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.507 [2024-07-22 18:10:22.624714] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.507 [2024-07-22 18:10:22.624727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.507 qpair failed and we were unable to recover it. 00:33:18.507 [2024-07-22 18:10:22.634586] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.507 [2024-07-22 18:10:22.634656] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.507 [2024-07-22 18:10:22.634670] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.507 [2024-07-22 18:10:22.634677] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.507 [2024-07-22 18:10:22.634682] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.507 [2024-07-22 18:10:22.634695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.507 qpair failed and we were unable to recover it. 
00:33:18.507 [2024-07-22 18:10:22.644660] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.507 [2024-07-22 18:10:22.644777] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.507 [2024-07-22 18:10:22.644792] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.507 [2024-07-22 18:10:22.644798] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.507 [2024-07-22 18:10:22.644804] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.507 [2024-07-22 18:10:22.644816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.507 qpair failed and we were unable to recover it. 00:33:18.507 [2024-07-22 18:10:22.655009] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.507 [2024-07-22 18:10:22.655126] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.507 [2024-07-22 18:10:22.655140] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.507 [2024-07-22 18:10:22.655147] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.507 [2024-07-22 18:10:22.655153] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.507 [2024-07-22 18:10:22.655165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.507 qpair failed and we were unable to recover it. 00:33:18.507 [2024-07-22 18:10:22.664688] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.507 [2024-07-22 18:10:22.664769] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.507 [2024-07-22 18:10:22.664784] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.507 [2024-07-22 18:10:22.664790] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.507 [2024-07-22 18:10:22.664799] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.507 [2024-07-22 18:10:22.664811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.507 qpair failed and we were unable to recover it. 
00:33:18.507 [2024-07-22 18:10:22.674722] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.507 [2024-07-22 18:10:22.674788] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.507 [2024-07-22 18:10:22.674804] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.507 [2024-07-22 18:10:22.674811] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.507 [2024-07-22 18:10:22.674817] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.507 [2024-07-22 18:10:22.674829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.507 qpair failed and we were unable to recover it. 00:33:18.507 [2024-07-22 18:10:22.684754] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.507 [2024-07-22 18:10:22.684822] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.507 [2024-07-22 18:10:22.684837] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.507 [2024-07-22 18:10:22.684843] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.507 [2024-07-22 18:10:22.684849] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.507 [2024-07-22 18:10:22.684861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.507 qpair failed and we were unable to recover it. 00:33:18.507 [2024-07-22 18:10:22.695058] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.507 [2024-07-22 18:10:22.695164] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.507 [2024-07-22 18:10:22.695178] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.507 [2024-07-22 18:10:22.695184] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.507 [2024-07-22 18:10:22.695190] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.507 [2024-07-22 18:10:22.695203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.507 qpair failed and we were unable to recover it. 
00:33:18.507 [2024-07-22 18:10:22.704806] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.507 [2024-07-22 18:10:22.704899] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.507 [2024-07-22 18:10:22.704913] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.507 [2024-07-22 18:10:22.704920] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.507 [2024-07-22 18:10:22.704926] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.507 [2024-07-22 18:10:22.704938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.507 qpair failed and we were unable to recover it. 00:33:18.507 [2024-07-22 18:10:22.714818] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.507 [2024-07-22 18:10:22.714894] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.507 [2024-07-22 18:10:22.714909] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.507 [2024-07-22 18:10:22.714916] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.507 [2024-07-22 18:10:22.714921] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.507 [2024-07-22 18:10:22.714933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.507 qpair failed and we were unable to recover it. 00:33:18.507 [2024-07-22 18:10:22.724869] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.507 [2024-07-22 18:10:22.724934] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.507 [2024-07-22 18:10:22.724948] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.507 [2024-07-22 18:10:22.724955] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.507 [2024-07-22 18:10:22.724961] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.507 [2024-07-22 18:10:22.724973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.507 qpair failed and we were unable to recover it. 
00:33:18.507 [2024-07-22 18:10:22.735135] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.508 [2024-07-22 18:10:22.735239] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.508 [2024-07-22 18:10:22.735254] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.508 [2024-07-22 18:10:22.735261] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.508 [2024-07-22 18:10:22.735267] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.508 [2024-07-22 18:10:22.735279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.508 qpair failed and we were unable to recover it. 00:33:18.508 [2024-07-22 18:10:22.744926] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.508 [2024-07-22 18:10:22.745007] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.508 [2024-07-22 18:10:22.745022] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.508 [2024-07-22 18:10:22.745028] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.508 [2024-07-22 18:10:22.745034] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.508 [2024-07-22 18:10:22.745046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.508 qpair failed and we were unable to recover it. 00:33:18.508 [2024-07-22 18:10:22.754933] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.508 [2024-07-22 18:10:22.754996] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.508 [2024-07-22 18:10:22.755011] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.508 [2024-07-22 18:10:22.755017] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.508 [2024-07-22 18:10:22.755027] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.508 [2024-07-22 18:10:22.755039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.508 qpair failed and we were unable to recover it. 
00:33:18.508 [2024-07-22 18:10:22.764988] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.508 [2024-07-22 18:10:22.765059] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.508 [2024-07-22 18:10:22.765082] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.508 [2024-07-22 18:10:22.765090] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.508 [2024-07-22 18:10:22.765096] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.508 [2024-07-22 18:10:22.765113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.508 qpair failed and we were unable to recover it. 00:33:18.508 [2024-07-22 18:10:22.775203] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.508 [2024-07-22 18:10:22.775306] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.508 [2024-07-22 18:10:22.775323] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.508 [2024-07-22 18:10:22.775330] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.508 [2024-07-22 18:10:22.775336] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.508 [2024-07-22 18:10:22.775353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.508 qpair failed and we were unable to recover it. 00:33:18.770 [2024-07-22 18:10:22.784972] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.770 [2024-07-22 18:10:22.785101] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.770 [2024-07-22 18:10:22.785116] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.770 [2024-07-22 18:10:22.785123] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.770 [2024-07-22 18:10:22.785128] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.770 [2024-07-22 18:10:22.785141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.770 qpair failed and we were unable to recover it. 
00:33:18.770 [2024-07-22 18:10:22.794982] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.770 [2024-07-22 18:10:22.795054] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.770 [2024-07-22 18:10:22.795071] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.770 [2024-07-22 18:10:22.795077] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.770 [2024-07-22 18:10:22.795083] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.770 [2024-07-22 18:10:22.795099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.770 qpair failed and we were unable to recover it. 00:33:18.770 [2024-07-22 18:10:22.805158] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.770 [2024-07-22 18:10:22.805239] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.770 [2024-07-22 18:10:22.805254] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.770 [2024-07-22 18:10:22.805261] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.770 [2024-07-22 18:10:22.805267] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.770 [2024-07-22 18:10:22.805280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.770 qpair failed and we were unable to recover it. 00:33:18.770 [2024-07-22 18:10:22.815367] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.770 [2024-07-22 18:10:22.815469] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.770 [2024-07-22 18:10:22.815484] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.770 [2024-07-22 18:10:22.815491] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.770 [2024-07-22 18:10:22.815496] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.770 [2024-07-22 18:10:22.815509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.770 qpair failed and we were unable to recover it. 
00:33:18.770 [2024-07-22 18:10:22.825283] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.770 [2024-07-22 18:10:22.825375] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.770 [2024-07-22 18:10:22.825391] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.770 [2024-07-22 18:10:22.825397] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.771 [2024-07-22 18:10:22.825403] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.771 [2024-07-22 18:10:22.825416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.771 qpair failed and we were unable to recover it. 00:33:18.771 [2024-07-22 18:10:22.835238] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.771 [2024-07-22 18:10:22.835304] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.771 [2024-07-22 18:10:22.835320] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.771 [2024-07-22 18:10:22.835326] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.771 [2024-07-22 18:10:22.835332] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.771 [2024-07-22 18:10:22.835344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.771 qpair failed and we were unable to recover it. 00:33:18.771 [2024-07-22 18:10:22.845176] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.771 [2024-07-22 18:10:22.845243] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.771 [2024-07-22 18:10:22.845258] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.771 [2024-07-22 18:10:22.845265] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.771 [2024-07-22 18:10:22.845276] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.771 [2024-07-22 18:10:22.845290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.771 qpair failed and we were unable to recover it. 
00:33:18.771 [2024-07-22 18:10:22.855484] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.771 [2024-07-22 18:10:22.855618] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.771 [2024-07-22 18:10:22.855634] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.771 [2024-07-22 18:10:22.855640] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.771 [2024-07-22 18:10:22.855646] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.771 [2024-07-22 18:10:22.855659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.771 qpair failed and we were unable to recover it. 00:33:18.771 [2024-07-22 18:10:22.865376] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.771 [2024-07-22 18:10:22.865462] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.771 [2024-07-22 18:10:22.865477] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.771 [2024-07-22 18:10:22.865483] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.771 [2024-07-22 18:10:22.865489] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.771 [2024-07-22 18:10:22.865502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.771 qpair failed and we were unable to recover it. 00:33:18.771 [2024-07-22 18:10:22.875379] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.771 [2024-07-22 18:10:22.875440] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.771 [2024-07-22 18:10:22.875455] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.771 [2024-07-22 18:10:22.875461] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.771 [2024-07-22 18:10:22.875467] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.771 [2024-07-22 18:10:22.875480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.771 qpair failed and we were unable to recover it. 
00:33:18.771 [2024-07-22 18:10:22.885460] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.771 [2024-07-22 18:10:22.885579] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.771 [2024-07-22 18:10:22.885593] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.771 [2024-07-22 18:10:22.885600] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.771 [2024-07-22 18:10:22.885606] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.771 [2024-07-22 18:10:22.885618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.771 qpair failed and we were unable to recover it. 00:33:18.771 [2024-07-22 18:10:22.895752] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.771 [2024-07-22 18:10:22.895866] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.771 [2024-07-22 18:10:22.895881] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.771 [2024-07-22 18:10:22.895888] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.771 [2024-07-22 18:10:22.895893] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.771 [2024-07-22 18:10:22.895906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.771 qpair failed and we were unable to recover it. 00:33:18.771 [2024-07-22 18:10:22.905480] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.771 [2024-07-22 18:10:22.905586] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.771 [2024-07-22 18:10:22.905601] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.771 [2024-07-22 18:10:22.905608] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.771 [2024-07-22 18:10:22.905614] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.771 [2024-07-22 18:10:22.905626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.771 qpair failed and we were unable to recover it. 
00:33:18.771 [2024-07-22 18:10:22.915382] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.771 [2024-07-22 18:10:22.915456] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.771 [2024-07-22 18:10:22.915471] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.771 [2024-07-22 18:10:22.915477] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.771 [2024-07-22 18:10:22.915483] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.771 [2024-07-22 18:10:22.915495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.771 qpair failed and we were unable to recover it. 00:33:18.771 [2024-07-22 18:10:22.925545] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.771 [2024-07-22 18:10:22.925634] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.771 [2024-07-22 18:10:22.925650] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.771 [2024-07-22 18:10:22.925656] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.771 [2024-07-22 18:10:22.925662] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.771 [2024-07-22 18:10:22.925675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.771 qpair failed and we were unable to recover it. 00:33:18.771 [2024-07-22 18:10:22.935930] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.771 [2024-07-22 18:10:22.936063] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.771 [2024-07-22 18:10:22.936078] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.771 [2024-07-22 18:10:22.936088] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.771 [2024-07-22 18:10:22.936094] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.771 [2024-07-22 18:10:22.936107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.771 qpair failed and we were unable to recover it. 
00:33:18.771 [2024-07-22 18:10:22.945504] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.771 [2024-07-22 18:10:22.945588] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.771 [2024-07-22 18:10:22.945603] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.771 [2024-07-22 18:10:22.945609] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.771 [2024-07-22 18:10:22.945615] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.771 [2024-07-22 18:10:22.945627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.771 qpair failed and we were unable to recover it. 00:33:18.771 [2024-07-22 18:10:22.955658] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.771 [2024-07-22 18:10:22.955740] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.771 [2024-07-22 18:10:22.955754] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.771 [2024-07-22 18:10:22.955761] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.771 [2024-07-22 18:10:22.955767] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.771 [2024-07-22 18:10:22.955779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.771 qpair failed and we were unable to recover it. 00:33:18.772 [2024-07-22 18:10:22.965731] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.772 [2024-07-22 18:10:22.965805] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.772 [2024-07-22 18:10:22.965821] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.772 [2024-07-22 18:10:22.965827] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.772 [2024-07-22 18:10:22.965832] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.772 [2024-07-22 18:10:22.965845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.772 qpair failed and we were unable to recover it. 
00:33:18.772 [2024-07-22 18:10:22.976029] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.772 [2024-07-22 18:10:22.976129] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.772 [2024-07-22 18:10:22.976144] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.772 [2024-07-22 18:10:22.976150] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.772 [2024-07-22 18:10:22.976156] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.772 [2024-07-22 18:10:22.976168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.772 qpair failed and we were unable to recover it. 00:33:18.772 [2024-07-22 18:10:22.985766] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.772 [2024-07-22 18:10:22.985848] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.772 [2024-07-22 18:10:22.985863] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.772 [2024-07-22 18:10:22.985869] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.772 [2024-07-22 18:10:22.985875] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.772 [2024-07-22 18:10:22.985887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.772 qpair failed and we were unable to recover it. 00:33:18.772 [2024-07-22 18:10:22.995708] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.772 [2024-07-22 18:10:22.995779] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.772 [2024-07-22 18:10:22.995793] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.772 [2024-07-22 18:10:22.995800] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.772 [2024-07-22 18:10:22.995805] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.772 [2024-07-22 18:10:22.995818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.772 qpair failed and we were unable to recover it. 
00:33:18.772 [2024-07-22 18:10:23.005842] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.772 [2024-07-22 18:10:23.005928] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.772 [2024-07-22 18:10:23.005943] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.772 [2024-07-22 18:10:23.005951] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.772 [2024-07-22 18:10:23.005957] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.772 [2024-07-22 18:10:23.005970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.772 qpair failed and we were unable to recover it. 00:33:18.772 [2024-07-22 18:10:23.016176] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.772 [2024-07-22 18:10:23.016292] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.772 [2024-07-22 18:10:23.016307] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.772 [2024-07-22 18:10:23.016313] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.772 [2024-07-22 18:10:23.016320] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.772 [2024-07-22 18:10:23.016332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.772 qpair failed and we were unable to recover it. 00:33:18.772 [2024-07-22 18:10:23.025829] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.772 [2024-07-22 18:10:23.025912] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.772 [2024-07-22 18:10:23.025928] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.772 [2024-07-22 18:10:23.025938] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.772 [2024-07-22 18:10:23.025944] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.772 [2024-07-22 18:10:23.025957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.772 qpair failed and we were unable to recover it. 
00:33:18.772 [2024-07-22 18:10:23.035957] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:18.772 [2024-07-22 18:10:23.036070] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:18.772 [2024-07-22 18:10:23.036085] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:18.772 [2024-07-22 18:10:23.036092] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:18.772 [2024-07-22 18:10:23.036098] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:18.772 [2024-07-22 18:10:23.036110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:18.772 qpair failed and we were unable to recover it. 00:33:19.033 [2024-07-22 18:10:23.045980] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.033 [2024-07-22 18:10:23.046044] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.033 [2024-07-22 18:10:23.046058] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.033 [2024-07-22 18:10:23.046065] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.033 [2024-07-22 18:10:23.046070] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.033 [2024-07-22 18:10:23.046083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.033 qpair failed and we were unable to recover it. 00:33:19.033 [2024-07-22 18:10:23.056177] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.033 [2024-07-22 18:10:23.056289] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.033 [2024-07-22 18:10:23.056304] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.033 [2024-07-22 18:10:23.056311] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.033 [2024-07-22 18:10:23.056317] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.033 [2024-07-22 18:10:23.056330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.033 qpair failed and we were unable to recover it. 
00:33:19.033 [2024-07-22 18:10:23.065955] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.033 [2024-07-22 18:10:23.066038] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.033 [2024-07-22 18:10:23.066053] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.033 [2024-07-22 18:10:23.066060] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.033 [2024-07-22 18:10:23.066065] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.033 [2024-07-22 18:10:23.066079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.033 qpair failed and we were unable to recover it. 00:33:19.033 [2024-07-22 18:10:23.076104] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.034 [2024-07-22 18:10:23.076206] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.034 [2024-07-22 18:10:23.076221] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.034 [2024-07-22 18:10:23.076228] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.034 [2024-07-22 18:10:23.076234] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.034 [2024-07-22 18:10:23.076246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.034 qpair failed and we were unable to recover it. 00:33:19.034 [2024-07-22 18:10:23.086092] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.034 [2024-07-22 18:10:23.086159] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.034 [2024-07-22 18:10:23.086174] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.034 [2024-07-22 18:10:23.086181] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.034 [2024-07-22 18:10:23.086186] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.034 [2024-07-22 18:10:23.086199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.034 qpair failed and we were unable to recover it. 
00:33:19.034 [2024-07-22 18:10:23.096398] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.034 [2024-07-22 18:10:23.096507] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.034 [2024-07-22 18:10:23.096521] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.034 [2024-07-22 18:10:23.096528] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.034 [2024-07-22 18:10:23.096534] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.034 [2024-07-22 18:10:23.096546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.034 qpair failed and we were unable to recover it. 00:33:19.034 [2024-07-22 18:10:23.106186] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.034 [2024-07-22 18:10:23.106269] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.034 [2024-07-22 18:10:23.106283] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.034 [2024-07-22 18:10:23.106290] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.034 [2024-07-22 18:10:23.106295] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.034 [2024-07-22 18:10:23.106308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.034 qpair failed and we were unable to recover it. 00:33:19.034 [2024-07-22 18:10:23.116160] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.034 [2024-07-22 18:10:23.116231] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.034 [2024-07-22 18:10:23.116247] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.034 [2024-07-22 18:10:23.116257] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.034 [2024-07-22 18:10:23.116262] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.034 [2024-07-22 18:10:23.116274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.034 qpair failed and we were unable to recover it. 
00:33:19.034 [2024-07-22 18:10:23.126216] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.034 [2024-07-22 18:10:23.126282] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.034 [2024-07-22 18:10:23.126297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.034 [2024-07-22 18:10:23.126304] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.034 [2024-07-22 18:10:23.126310] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.034 [2024-07-22 18:10:23.126322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.034 qpair failed and we were unable to recover it. 00:33:19.034 [2024-07-22 18:10:23.136557] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.034 [2024-07-22 18:10:23.136662] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.034 [2024-07-22 18:10:23.136677] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.034 [2024-07-22 18:10:23.136683] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.034 [2024-07-22 18:10:23.136689] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.034 [2024-07-22 18:10:23.136701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.034 qpair failed and we were unable to recover it. 00:33:19.034 [2024-07-22 18:10:23.146267] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.034 [2024-07-22 18:10:23.146346] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.034 [2024-07-22 18:10:23.146365] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.034 [2024-07-22 18:10:23.146371] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.034 [2024-07-22 18:10:23.146377] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.034 [2024-07-22 18:10:23.146389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.034 qpair failed and we were unable to recover it. 
00:33:19.034 [2024-07-22 18:10:23.156336] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.034 [2024-07-22 18:10:23.156416] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.034 [2024-07-22 18:10:23.156432] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.034 [2024-07-22 18:10:23.156438] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.034 [2024-07-22 18:10:23.156444] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.034 [2024-07-22 18:10:23.156456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.034 qpair failed and we were unable to recover it. 00:33:19.034 [2024-07-22 18:10:23.166360] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.034 [2024-07-22 18:10:23.166432] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.034 [2024-07-22 18:10:23.166446] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.034 [2024-07-22 18:10:23.166453] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.034 [2024-07-22 18:10:23.166459] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.034 [2024-07-22 18:10:23.166471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.034 qpair failed and we were unable to recover it. 00:33:19.034 [2024-07-22 18:10:23.176611] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.034 [2024-07-22 18:10:23.176716] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.034 [2024-07-22 18:10:23.176731] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.034 [2024-07-22 18:10:23.176737] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.034 [2024-07-22 18:10:23.176743] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.034 [2024-07-22 18:10:23.176756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.034 qpair failed and we were unable to recover it. 
00:33:19.034 [2024-07-22 18:10:23.186455] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.034 [2024-07-22 18:10:23.186538] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.034 [2024-07-22 18:10:23.186553] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.034 [2024-07-22 18:10:23.186560] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.034 [2024-07-22 18:10:23.186565] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.034 [2024-07-22 18:10:23.186578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.034 qpair failed and we were unable to recover it. 00:33:19.034 [2024-07-22 18:10:23.196476] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.034 [2024-07-22 18:10:23.196541] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.034 [2024-07-22 18:10:23.196556] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.034 [2024-07-22 18:10:23.196562] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.034 [2024-07-22 18:10:23.196568] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.034 [2024-07-22 18:10:23.196580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.034 qpair failed and we were unable to recover it. 00:33:19.034 [2024-07-22 18:10:23.206523] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.034 [2024-07-22 18:10:23.206590] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.034 [2024-07-22 18:10:23.206605] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.035 [2024-07-22 18:10:23.206615] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.035 [2024-07-22 18:10:23.206620] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.035 [2024-07-22 18:10:23.206633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.035 qpair failed and we were unable to recover it. 
00:33:19.035 [2024-07-22 18:10:23.216849] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.035 [2024-07-22 18:10:23.216959] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.035 [2024-07-22 18:10:23.216973] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.035 [2024-07-22 18:10:23.216980] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.035 [2024-07-22 18:10:23.216985] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.035 [2024-07-22 18:10:23.216998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.035 qpair failed and we were unable to recover it. 00:33:19.035 [2024-07-22 18:10:23.226616] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.035 [2024-07-22 18:10:23.226692] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.035 [2024-07-22 18:10:23.226709] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.035 [2024-07-22 18:10:23.226716] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.035 [2024-07-22 18:10:23.226722] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.035 [2024-07-22 18:10:23.226736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.035 qpair failed and we were unable to recover it. 00:33:19.035 [2024-07-22 18:10:23.236649] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.035 [2024-07-22 18:10:23.236756] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.035 [2024-07-22 18:10:23.236772] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.035 [2024-07-22 18:10:23.236778] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.035 [2024-07-22 18:10:23.236784] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.035 [2024-07-22 18:10:23.236796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.035 qpair failed and we were unable to recover it. 
00:33:19.035 [2024-07-22 18:10:23.246631] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.035 [2024-07-22 18:10:23.246750] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.035 [2024-07-22 18:10:23.246765] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.035 [2024-07-22 18:10:23.246772] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.035 [2024-07-22 18:10:23.246777] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.035 [2024-07-22 18:10:23.246790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.035 qpair failed and we were unable to recover it. 00:33:19.035 [2024-07-22 18:10:23.257017] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.035 [2024-07-22 18:10:23.257123] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.035 [2024-07-22 18:10:23.257138] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.035 [2024-07-22 18:10:23.257144] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.035 [2024-07-22 18:10:23.257150] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.035 [2024-07-22 18:10:23.257163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.035 qpair failed and we were unable to recover it. 00:33:19.035 [2024-07-22 18:10:23.266750] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.035 [2024-07-22 18:10:23.266828] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.035 [2024-07-22 18:10:23.266843] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.035 [2024-07-22 18:10:23.266850] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.035 [2024-07-22 18:10:23.266855] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.035 [2024-07-22 18:10:23.266868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.035 qpair failed and we were unable to recover it. 
00:33:19.035 [2024-07-22 18:10:23.276799] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.035 [2024-07-22 18:10:23.276916] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.035 [2024-07-22 18:10:23.276931] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.035 [2024-07-22 18:10:23.276937] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.035 [2024-07-22 18:10:23.276943] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.035 [2024-07-22 18:10:23.276955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.035 qpair failed and we were unable to recover it. 00:33:19.035 [2024-07-22 18:10:23.286800] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.035 [2024-07-22 18:10:23.286867] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.035 [2024-07-22 18:10:23.286882] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.035 [2024-07-22 18:10:23.286888] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.035 [2024-07-22 18:10:23.286894] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.035 [2024-07-22 18:10:23.286906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.035 qpair failed and we were unable to recover it. 00:33:19.035 [2024-07-22 18:10:23.297124] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.035 [2024-07-22 18:10:23.297231] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.035 [2024-07-22 18:10:23.297246] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.035 [2024-07-22 18:10:23.297257] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.035 [2024-07-22 18:10:23.297263] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.035 [2024-07-22 18:10:23.297275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.035 qpair failed and we were unable to recover it. 
00:33:19.035 [2024-07-22 18:10:23.306864] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.035 [2024-07-22 18:10:23.306956] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.035 [2024-07-22 18:10:23.306971] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.035 [2024-07-22 18:10:23.306978] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.035 [2024-07-22 18:10:23.306984] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.035 [2024-07-22 18:10:23.306996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.035 qpair failed and we were unable to recover it. 00:33:19.297 [2024-07-22 18:10:23.316893] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.297 [2024-07-22 18:10:23.316958] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.297 [2024-07-22 18:10:23.316974] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.297 [2024-07-22 18:10:23.316980] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.297 [2024-07-22 18:10:23.316986] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.297 [2024-07-22 18:10:23.316998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.297 qpair failed and we were unable to recover it. 00:33:19.297 [2024-07-22 18:10:23.326919] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.297 [2024-07-22 18:10:23.326984] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.297 [2024-07-22 18:10:23.326999] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.297 [2024-07-22 18:10:23.327006] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.297 [2024-07-22 18:10:23.327012] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.297 [2024-07-22 18:10:23.327024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.297 qpair failed and we were unable to recover it. 
00:33:19.297 [2024-07-22 18:10:23.337293] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.297 [2024-07-22 18:10:23.337436] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.297 [2024-07-22 18:10:23.337453] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.297 [2024-07-22 18:10:23.337460] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.297 [2024-07-22 18:10:23.337465] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.297 [2024-07-22 18:10:23.337478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.297 qpair failed and we were unable to recover it. 00:33:19.297 [2024-07-22 18:10:23.346978] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.297 [2024-07-22 18:10:23.347060] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.297 [2024-07-22 18:10:23.347076] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.297 [2024-07-22 18:10:23.347082] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.297 [2024-07-22 18:10:23.347088] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.297 [2024-07-22 18:10:23.347100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.297 qpair failed and we were unable to recover it. 00:33:19.297 [2024-07-22 18:10:23.357033] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.297 [2024-07-22 18:10:23.357099] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.297 [2024-07-22 18:10:23.357114] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.297 [2024-07-22 18:10:23.357121] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.297 [2024-07-22 18:10:23.357127] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.297 [2024-07-22 18:10:23.357139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.297 qpair failed and we were unable to recover it. 
00:33:19.297 [2024-07-22 18:10:23.367022] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.297 [2024-07-22 18:10:23.367092] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.297 [2024-07-22 18:10:23.367107] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.297 [2024-07-22 18:10:23.367114] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.297 [2024-07-22 18:10:23.367120] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.297 [2024-07-22 18:10:23.367132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.297 qpair failed and we were unable to recover it. 00:33:19.297 [2024-07-22 18:10:23.377385] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.297 [2024-07-22 18:10:23.377491] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.297 [2024-07-22 18:10:23.377506] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.297 [2024-07-22 18:10:23.377513] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.297 [2024-07-22 18:10:23.377518] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.297 [2024-07-22 18:10:23.377531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.297 qpair failed and we were unable to recover it. 00:33:19.297 [2024-07-22 18:10:23.387121] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.297 [2024-07-22 18:10:23.387204] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.297 [2024-07-22 18:10:23.387223] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.297 [2024-07-22 18:10:23.387230] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.297 [2024-07-22 18:10:23.387236] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.297 [2024-07-22 18:10:23.387249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.297 qpair failed and we were unable to recover it. 
00:33:19.297 [2024-07-22 18:10:23.397148] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.297 [2024-07-22 18:10:23.397229] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.297 [2024-07-22 18:10:23.397244] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.297 [2024-07-22 18:10:23.397250] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.297 [2024-07-22 18:10:23.397256] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.297 [2024-07-22 18:10:23.397268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.297 qpair failed and we were unable to recover it. 00:33:19.297 [2024-07-22 18:10:23.407172] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.297 [2024-07-22 18:10:23.407243] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.297 [2024-07-22 18:10:23.407259] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.298 [2024-07-22 18:10:23.407265] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.298 [2024-07-22 18:10:23.407271] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.298 [2024-07-22 18:10:23.407283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.298 qpair failed and we were unable to recover it. 00:33:19.298 [2024-07-22 18:10:23.417509] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.298 [2024-07-22 18:10:23.417616] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.298 [2024-07-22 18:10:23.417631] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.298 [2024-07-22 18:10:23.417638] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.298 [2024-07-22 18:10:23.417643] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.298 [2024-07-22 18:10:23.417656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.298 qpair failed and we were unable to recover it. 
00:33:19.298 [2024-07-22 18:10:23.427160] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.298 [2024-07-22 18:10:23.427235] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.298 [2024-07-22 18:10:23.427250] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.298 [2024-07-22 18:10:23.427256] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.298 [2024-07-22 18:10:23.427262] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.298 [2024-07-22 18:10:23.427274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.298 qpair failed and we were unable to recover it. 00:33:19.298 [2024-07-22 18:10:23.437270] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.298 [2024-07-22 18:10:23.437337] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.298 [2024-07-22 18:10:23.437356] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.298 [2024-07-22 18:10:23.437363] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.298 [2024-07-22 18:10:23.437369] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.298 [2024-07-22 18:10:23.437382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.298 qpair failed and we were unable to recover it. 00:33:19.298 [2024-07-22 18:10:23.447294] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.298 [2024-07-22 18:10:23.447364] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.298 [2024-07-22 18:10:23.447380] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.298 [2024-07-22 18:10:23.447388] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.298 [2024-07-22 18:10:23.447393] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.298 [2024-07-22 18:10:23.447406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.298 qpair failed and we were unable to recover it. 
00:33:19.298 [2024-07-22 18:10:23.457663] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.298 [2024-07-22 18:10:23.457764] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.298 [2024-07-22 18:10:23.457780] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.298 [2024-07-22 18:10:23.457786] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.298 [2024-07-22 18:10:23.457792] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.298 [2024-07-22 18:10:23.457805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.298 qpair failed and we were unable to recover it. 00:33:19.298 [2024-07-22 18:10:23.467399] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.298 [2024-07-22 18:10:23.467483] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.298 [2024-07-22 18:10:23.467498] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.298 [2024-07-22 18:10:23.467505] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.298 [2024-07-22 18:10:23.467511] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.298 [2024-07-22 18:10:23.467523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.298 qpair failed and we were unable to recover it. 00:33:19.298 [2024-07-22 18:10:23.477415] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.298 [2024-07-22 18:10:23.477494] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.298 [2024-07-22 18:10:23.477512] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.298 [2024-07-22 18:10:23.477519] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.298 [2024-07-22 18:10:23.477525] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.298 [2024-07-22 18:10:23.477538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.298 qpair failed and we were unable to recover it. 
00:33:19.298 [2024-07-22 18:10:23.487419] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.298 [2024-07-22 18:10:23.487546] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.298 [2024-07-22 18:10:23.487561] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.298 [2024-07-22 18:10:23.487568] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.298 [2024-07-22 18:10:23.487573] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.298 [2024-07-22 18:10:23.487586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.298 qpair failed and we were unable to recover it. 00:33:19.298 [2024-07-22 18:10:23.497814] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.298 [2024-07-22 18:10:23.497960] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.298 [2024-07-22 18:10:23.497976] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.298 [2024-07-22 18:10:23.497983] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.298 [2024-07-22 18:10:23.497989] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.298 [2024-07-22 18:10:23.498001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.298 qpair failed and we were unable to recover it. 00:33:19.298 [2024-07-22 18:10:23.507540] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.298 [2024-07-22 18:10:23.507621] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.298 [2024-07-22 18:10:23.507636] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.298 [2024-07-22 18:10:23.507642] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.298 [2024-07-22 18:10:23.507647] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.298 [2024-07-22 18:10:23.507660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.298 qpair failed and we were unable to recover it. 
00:33:19.298 [2024-07-22 18:10:23.517461] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.298 [2024-07-22 18:10:23.517527] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.298 [2024-07-22 18:10:23.517542] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.298 [2024-07-22 18:10:23.517549] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.298 [2024-07-22 18:10:23.517555] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.298 [2024-07-22 18:10:23.517571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.298 qpair failed and we were unable to recover it. 00:33:19.298 [2024-07-22 18:10:23.527580] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.298 [2024-07-22 18:10:23.527649] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.298 [2024-07-22 18:10:23.527665] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.298 [2024-07-22 18:10:23.527672] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.298 [2024-07-22 18:10:23.527678] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.298 [2024-07-22 18:10:23.527691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.298 qpair failed and we were unable to recover it. 00:33:19.298 [2024-07-22 18:10:23.537921] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.298 [2024-07-22 18:10:23.538037] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.298 [2024-07-22 18:10:23.538052] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.298 [2024-07-22 18:10:23.538058] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.298 [2024-07-22 18:10:23.538064] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.299 [2024-07-22 18:10:23.538077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.299 qpair failed and we were unable to recover it. 
00:33:19.299 [2024-07-22 18:10:23.547692] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.299 [2024-07-22 18:10:23.547770] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.299 [2024-07-22 18:10:23.547785] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.299 [2024-07-22 18:10:23.547792] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.299 [2024-07-22 18:10:23.547797] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.299 [2024-07-22 18:10:23.547810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.299 qpair failed and we were unable to recover it. 00:33:19.299 [2024-07-22 18:10:23.557698] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.299 [2024-07-22 18:10:23.557802] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.299 [2024-07-22 18:10:23.557817] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.299 [2024-07-22 18:10:23.557824] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.299 [2024-07-22 18:10:23.557829] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.299 [2024-07-22 18:10:23.557842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.299 qpair failed and we were unable to recover it. 00:33:19.299 [2024-07-22 18:10:23.567742] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.299 [2024-07-22 18:10:23.567817] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.299 [2024-07-22 18:10:23.567838] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.299 [2024-07-22 18:10:23.567844] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.299 [2024-07-22 18:10:23.567850] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.299 [2024-07-22 18:10:23.567862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.299 qpair failed and we were unable to recover it. 
00:33:19.560 [2024-07-22 18:10:23.578067] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.560 [2024-07-22 18:10:23.578180] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.560 [2024-07-22 18:10:23.578195] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.560 [2024-07-22 18:10:23.578202] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.560 [2024-07-22 18:10:23.578207] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.560 [2024-07-22 18:10:23.578219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.560 qpair failed and we were unable to recover it. 00:33:19.560 [2024-07-22 18:10:23.587825] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.560 [2024-07-22 18:10:23.587907] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.560 [2024-07-22 18:10:23.587922] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.560 [2024-07-22 18:10:23.587929] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.560 [2024-07-22 18:10:23.587934] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.560 [2024-07-22 18:10:23.587947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.560 qpair failed and we were unable to recover it. 00:33:19.560 [2024-07-22 18:10:23.597864] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.560 [2024-07-22 18:10:23.597971] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.560 [2024-07-22 18:10:23.597986] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.560 [2024-07-22 18:10:23.597993] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.560 [2024-07-22 18:10:23.597998] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.560 [2024-07-22 18:10:23.598011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.560 qpair failed and we were unable to recover it. 
00:33:19.560 [2024-07-22 18:10:23.607782] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.560 [2024-07-22 18:10:23.607864] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.560 [2024-07-22 18:10:23.607880] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.560 [2024-07-22 18:10:23.607886] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.560 [2024-07-22 18:10:23.607892] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.560 [2024-07-22 18:10:23.607908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.560 qpair failed and we were unable to recover it. 00:33:19.560 [2024-07-22 18:10:23.618185] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.560 [2024-07-22 18:10:23.618320] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.560 [2024-07-22 18:10:23.618338] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.560 [2024-07-22 18:10:23.618345] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.560 [2024-07-22 18:10:23.618358] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.560 [2024-07-22 18:10:23.618372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.560 qpair failed and we were unable to recover it. 00:33:19.560 [2024-07-22 18:10:23.627918] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.560 [2024-07-22 18:10:23.627993] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.560 [2024-07-22 18:10:23.628009] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.560 [2024-07-22 18:10:23.628016] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.560 [2024-07-22 18:10:23.628022] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.560 [2024-07-22 18:10:23.628035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.560 qpair failed and we were unable to recover it. 
00:33:19.560 [2024-07-22 18:10:23.637880] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.560 [2024-07-22 18:10:23.637973] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.560 [2024-07-22 18:10:23.637989] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.560 [2024-07-22 18:10:23.637996] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.560 [2024-07-22 18:10:23.638004] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.560 [2024-07-22 18:10:23.638018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.560 qpair failed and we were unable to recover it. 00:33:19.560 [2024-07-22 18:10:23.648023] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.560 [2024-07-22 18:10:23.648091] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.560 [2024-07-22 18:10:23.648106] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.560 [2024-07-22 18:10:23.648113] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.560 [2024-07-22 18:10:23.648118] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.561 [2024-07-22 18:10:23.648131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.561 qpair failed and we were unable to recover it. 00:33:19.561 [2024-07-22 18:10:23.658310] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.561 [2024-07-22 18:10:23.658471] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.561 [2024-07-22 18:10:23.658490] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.561 [2024-07-22 18:10:23.658497] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.561 [2024-07-22 18:10:23.658503] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.561 [2024-07-22 18:10:23.658516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.561 qpair failed and we were unable to recover it. 
00:33:19.561 [2024-07-22 18:10:23.668065] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.561 [2024-07-22 18:10:23.668145] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.561 [2024-07-22 18:10:23.668160] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.561 [2024-07-22 18:10:23.668167] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.561 [2024-07-22 18:10:23.668173] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.561 [2024-07-22 18:10:23.668185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.561 qpair failed and we were unable to recover it. 00:33:19.561 [2024-07-22 18:10:23.677980] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.561 [2024-07-22 18:10:23.678049] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.561 [2024-07-22 18:10:23.678064] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.561 [2024-07-22 18:10:23.678070] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.561 [2024-07-22 18:10:23.678076] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.561 [2024-07-22 18:10:23.678088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.561 qpair failed and we were unable to recover it. 00:33:19.561 [2024-07-22 18:10:23.688126] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.561 [2024-07-22 18:10:23.688197] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.561 [2024-07-22 18:10:23.688212] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.561 [2024-07-22 18:10:23.688219] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.561 [2024-07-22 18:10:23.688225] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.561 [2024-07-22 18:10:23.688237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.561 qpair failed and we were unable to recover it. 
00:33:19.561 [2024-07-22 18:10:23.698454] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.561 [2024-07-22 18:10:23.698562] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.561 [2024-07-22 18:10:23.698577] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.561 [2024-07-22 18:10:23.698584] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.561 [2024-07-22 18:10:23.698590] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.561 [2024-07-22 18:10:23.698606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.561 qpair failed and we were unable to recover it. 00:33:19.561 [2024-07-22 18:10:23.708168] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.561 [2024-07-22 18:10:23.708246] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.561 [2024-07-22 18:10:23.708261] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.561 [2024-07-22 18:10:23.708268] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.561 [2024-07-22 18:10:23.708274] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.561 [2024-07-22 18:10:23.708286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.561 qpair failed and we were unable to recover it. 00:33:19.561 [2024-07-22 18:10:23.718195] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.561 [2024-07-22 18:10:23.718261] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.561 [2024-07-22 18:10:23.718276] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.561 [2024-07-22 18:10:23.718283] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.561 [2024-07-22 18:10:23.718289] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.561 [2024-07-22 18:10:23.718301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.561 qpair failed and we were unable to recover it. 
00:33:19.561 [2024-07-22 18:10:23.728209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.561 [2024-07-22 18:10:23.728277] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.561 [2024-07-22 18:10:23.728292] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.561 [2024-07-22 18:10:23.728299] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.561 [2024-07-22 18:10:23.728304] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.561 [2024-07-22 18:10:23.728317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.561 qpair failed and we were unable to recover it. 00:33:19.561 [2024-07-22 18:10:23.738455] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.561 [2024-07-22 18:10:23.738555] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.561 [2024-07-22 18:10:23.738570] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.561 [2024-07-22 18:10:23.738577] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.561 [2024-07-22 18:10:23.738582] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.561 [2024-07-22 18:10:23.738595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.561 qpair failed and we were unable to recover it. 00:33:19.561 [2024-07-22 18:10:23.748334] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.561 [2024-07-22 18:10:23.748426] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.561 [2024-07-22 18:10:23.748445] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.561 [2024-07-22 18:10:23.748452] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.561 [2024-07-22 18:10:23.748457] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.561 [2024-07-22 18:10:23.748470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.561 qpair failed and we were unable to recover it. 
00:33:19.561 [2024-07-22 18:10:23.758341] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.561 [2024-07-22 18:10:23.758411] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.561 [2024-07-22 18:10:23.758427] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.561 [2024-07-22 18:10:23.758433] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.561 [2024-07-22 18:10:23.758439] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.561 [2024-07-22 18:10:23.758451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.561 qpair failed and we were unable to recover it. 00:33:19.561 [2024-07-22 18:10:23.768392] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.561 [2024-07-22 18:10:23.768479] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.561 [2024-07-22 18:10:23.768494] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.561 [2024-07-22 18:10:23.768500] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.561 [2024-07-22 18:10:23.768506] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.561 [2024-07-22 18:10:23.768519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.561 qpair failed and we were unable to recover it. 00:33:19.561 [2024-07-22 18:10:23.778726] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.561 [2024-07-22 18:10:23.778832] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.561 [2024-07-22 18:10:23.778846] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.561 [2024-07-22 18:10:23.778853] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.561 [2024-07-22 18:10:23.778858] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.561 [2024-07-22 18:10:23.778871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.561 qpair failed and we were unable to recover it. 
00:33:19.561 [2024-07-22 18:10:23.788421] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.561 [2024-07-22 18:10:23.788502] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.562 [2024-07-22 18:10:23.788517] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.562 [2024-07-22 18:10:23.788524] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.562 [2024-07-22 18:10:23.788532] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.562 [2024-07-22 18:10:23.788545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.562 qpair failed and we were unable to recover it. 00:33:19.562 [2024-07-22 18:10:23.798458] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.562 [2024-07-22 18:10:23.798542] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.562 [2024-07-22 18:10:23.798557] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.562 [2024-07-22 18:10:23.798564] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.562 [2024-07-22 18:10:23.798569] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.562 [2024-07-22 18:10:23.798582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.562 qpair failed and we were unable to recover it. 00:33:19.562 [2024-07-22 18:10:23.808493] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.562 [2024-07-22 18:10:23.808569] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.562 [2024-07-22 18:10:23.808584] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.562 [2024-07-22 18:10:23.808591] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.562 [2024-07-22 18:10:23.808596] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.562 [2024-07-22 18:10:23.808608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.562 qpair failed and we were unable to recover it. 
00:33:19.562 [2024-07-22 18:10:23.818737] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.562 [2024-07-22 18:10:23.818874] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.562 [2024-07-22 18:10:23.818889] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.562 [2024-07-22 18:10:23.818895] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.562 [2024-07-22 18:10:23.818901] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.562 [2024-07-22 18:10:23.818913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.562 qpair failed and we were unable to recover it. 00:33:19.562 [2024-07-22 18:10:23.828482] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.562 [2024-07-22 18:10:23.828585] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.562 [2024-07-22 18:10:23.828600] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.562 [2024-07-22 18:10:23.828606] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.562 [2024-07-22 18:10:23.828612] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.562 [2024-07-22 18:10:23.828624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.562 qpair failed and we were unable to recover it. 00:33:19.824 [2024-07-22 18:10:23.838618] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.824 [2024-07-22 18:10:23.838681] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.824 [2024-07-22 18:10:23.838700] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.824 [2024-07-22 18:10:23.838706] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.824 [2024-07-22 18:10:23.838712] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.824 [2024-07-22 18:10:23.838724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.824 qpair failed and we were unable to recover it. 
00:33:19.824 [2024-07-22 18:10:23.848586] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.824 [2024-07-22 18:10:23.848655] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.824 [2024-07-22 18:10:23.848670] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.824 [2024-07-22 18:10:23.848677] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.824 [2024-07-22 18:10:23.848682] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.824 [2024-07-22 18:10:23.848695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.824 qpair failed and we were unable to recover it. 00:33:19.824 [2024-07-22 18:10:23.858960] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.824 [2024-07-22 18:10:23.859064] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.824 [2024-07-22 18:10:23.859079] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.824 [2024-07-22 18:10:23.859086] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.824 [2024-07-22 18:10:23.859092] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.824 [2024-07-22 18:10:23.859104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.824 qpair failed and we were unable to recover it. 00:33:19.824 [2024-07-22 18:10:23.868587] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.824 [2024-07-22 18:10:23.868712] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.824 [2024-07-22 18:10:23.868728] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.824 [2024-07-22 18:10:23.868734] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.824 [2024-07-22 18:10:23.868740] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.824 [2024-07-22 18:10:23.868754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.824 qpair failed and we were unable to recover it. 
00:33:19.824 [2024-07-22 18:10:23.878702] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.824 [2024-07-22 18:10:23.878769] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.824 [2024-07-22 18:10:23.878785] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.824 [2024-07-22 18:10:23.878791] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.824 [2024-07-22 18:10:23.878809] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.824 [2024-07-22 18:10:23.878822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.824 qpair failed and we were unable to recover it. 00:33:19.824 [2024-07-22 18:10:23.888718] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.824 [2024-07-22 18:10:23.888784] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.824 [2024-07-22 18:10:23.888799] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.824 [2024-07-22 18:10:23.888805] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.824 [2024-07-22 18:10:23.888811] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.824 [2024-07-22 18:10:23.888823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.824 qpair failed and we were unable to recover it. 00:33:19.824 [2024-07-22 18:10:23.899118] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.824 [2024-07-22 18:10:23.899226] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.824 [2024-07-22 18:10:23.899240] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.824 [2024-07-22 18:10:23.899247] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.824 [2024-07-22 18:10:23.899252] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.824 [2024-07-22 18:10:23.899265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.824 qpair failed and we were unable to recover it. 
00:33:19.824 [2024-07-22 18:10:23.908811] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.824 [2024-07-22 18:10:23.908887] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.824 [2024-07-22 18:10:23.908902] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.824 [2024-07-22 18:10:23.908908] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.824 [2024-07-22 18:10:23.908914] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.824 [2024-07-22 18:10:23.908926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.824 qpair failed and we were unable to recover it. 00:33:19.824 [2024-07-22 18:10:23.918882] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.824 [2024-07-22 18:10:23.918986] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.824 [2024-07-22 18:10:23.919001] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.824 [2024-07-22 18:10:23.919008] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.824 [2024-07-22 18:10:23.919013] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.824 [2024-07-22 18:10:23.919025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.824 qpair failed and we were unable to recover it. 00:33:19.824 [2024-07-22 18:10:23.928874] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.824 [2024-07-22 18:10:23.929027] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.824 [2024-07-22 18:10:23.929041] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.824 [2024-07-22 18:10:23.929048] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.824 [2024-07-22 18:10:23.929054] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12dc8b0 00:33:19.824 [2024-07-22 18:10:23.929066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:19.824 qpair failed and we were unable to recover it. 
00:33:19.824 Read completed with error (sct=0, sc=8) 00:33:19.824 starting I/O failed 00:33:19.824 Read completed with error (sct=0, sc=8) 00:33:19.824 starting I/O failed 00:33:19.824 Read completed with error (sct=0, sc=8) 00:33:19.824 starting I/O failed 00:33:19.824 Read completed with error (sct=0, sc=8) 00:33:19.824 starting I/O failed 00:33:19.824 Read completed with error (sct=0, sc=8) 00:33:19.824 starting I/O failed 00:33:19.824 Read completed with error (sct=0, sc=8) 00:33:19.824 starting I/O failed 00:33:19.824 Read completed with error (sct=0, sc=8) 00:33:19.824 starting I/O failed 00:33:19.824 Write completed with error (sct=0, sc=8) 00:33:19.824 starting I/O failed 00:33:19.824 Write completed with error (sct=0, sc=8) 00:33:19.824 starting I/O failed 00:33:19.824 Read completed with error (sct=0, sc=8) 00:33:19.824 starting I/O failed 00:33:19.824 Write completed with error (sct=0, sc=8) 00:33:19.824 starting I/O failed 00:33:19.824 Write completed with error (sct=0, sc=8) 00:33:19.824 starting I/O failed 00:33:19.824 Read completed with error (sct=0, sc=8) 00:33:19.824 starting I/O failed 00:33:19.824 Write completed with error (sct=0, sc=8) 00:33:19.824 starting I/O failed 00:33:19.824 Write completed with error (sct=0, sc=8) 00:33:19.824 starting I/O failed 00:33:19.824 Read completed with error (sct=0, sc=8) 00:33:19.824 starting I/O failed 00:33:19.824 Read completed with error (sct=0, sc=8) 00:33:19.824 starting I/O failed 00:33:19.824 Read completed with error (sct=0, sc=8) 00:33:19.824 starting I/O failed 00:33:19.824 Read completed with error (sct=0, sc=8) 00:33:19.824 starting I/O failed 00:33:19.824 Read completed with error (sct=0, sc=8) 00:33:19.824 starting I/O failed 00:33:19.824 Write completed with error (sct=0, sc=8) 00:33:19.824 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Write completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Write completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Write completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Write completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Write completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 [2024-07-22 18:10:23.929402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:19.825 [2024-07-22 18:10:23.939195] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.825 [2024-07-22 18:10:23.939330] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.825 [2024-07-22 18:10:23.939358] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.825 [2024-07-22 18:10:23.939368] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric 
CONNECT command 00:33:19.825 [2024-07-22 18:10:23.939376] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae70000b90 00:33:19.825 [2024-07-22 18:10:23.939394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:19.825 qpair failed and we were unable to recover it. 00:33:19.825 [2024-07-22 18:10:23.948928] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.825 [2024-07-22 18:10:23.949044] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.825 [2024-07-22 18:10:23.949070] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.825 [2024-07-22 18:10:23.949078] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.825 [2024-07-22 18:10:23.949084] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae70000b90 00:33:19.825 [2024-07-22 18:10:23.949102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:19.825 qpair failed and we were unable to recover it. 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Write completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Write completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Write completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Write completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Write completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Write completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 
00:33:19.825 Write completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Write completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Write completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 [2024-07-22 18:10:23.949472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.825 [2024-07-22 18:10:23.959009] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.825 [2024-07-22 18:10:23.959076] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.825 [2024-07-22 18:10:23.959092] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.825 [2024-07-22 18:10:23.959098] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.825 [2024-07-22 18:10:23.959102] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae68000b90 00:33:19.825 [2024-07-22 18:10:23.959116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.825 qpair failed and we were unable to recover it. 00:33:19.825 [2024-07-22 18:10:23.968977] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.825 [2024-07-22 18:10:23.969040] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.825 [2024-07-22 18:10:23.969054] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.825 [2024-07-22 18:10:23.969062] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:19.825 [2024-07-22 18:10:23.969067] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae68000b90 00:33:19.825 [2024-07-22 18:10:23.969079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:19.825 qpair failed and we were unable to recover it. 
00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Write completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Write completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Write completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Write completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Write completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Write completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Write completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Write completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Write completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Write completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Write completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 Read completed with error (sct=0, sc=8) 00:33:19.825 starting I/O failed 00:33:19.825 [2024-07-22 18:10:23.969462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:19.825 [2024-07-22 18:10:23.979314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:19.826 [2024-07-22 18:10:23.979445] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:19.826 [2024-07-22 18:10:23.979463] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:19.826 [2024-07-22 18:10:23.979471] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric 
CONNECT command
00:33:19.826 [2024-07-22 18:10:23.979477] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae60000b90
00:33:19.826 [2024-07-22 18:10:23.979493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:19.826 qpair failed and we were unable to recover it.
00:33:19.826 [2024-07-22 18:10:23.989077] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:19.826 [2024-07-22 18:10:23.989162] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:19.826 [2024-07-22 18:10:23.989177] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:19.826 [2024-07-22 18:10:23.989184] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:19.826 [2024-07-22 18:10:23.989190] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fae60000b90
00:33:19.826 [2024-07-22 18:10:23.989208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:19.826 qpair failed and we were unable to recover it.
00:33:19.826 [2024-07-22 18:10:23.989580] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12da620 is same with the state(5) to be set
00:33:19.826 [2024-07-22 18:10:23.989890] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12da620 (9): Bad file descriptor
00:33:19.826 Initializing NVMe Controllers
00:33:19.826 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:19.826 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:19.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:33:19.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:33:19.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:33:19.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:33:19.826 Initialization complete. Launching workers.
00:33:19.826 Starting thread on core 1 00:33:19.826 Starting thread on core 2 00:33:19.826 Starting thread on core 3 00:33:19.826 Starting thread on core 0 00:33:19.826 18:10:23 -- host/target_disconnect.sh@59 -- # sync 00:33:19.826 00:33:19.826 real 0m11.306s 00:33:19.826 user 0m21.441s 00:33:19.826 sys 0m3.868s 00:33:19.826 18:10:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:19.826 18:10:23 -- common/autotest_common.sh@10 -- # set +x 00:33:19.826 ************************************ 00:33:19.826 END TEST nvmf_target_disconnect_tc2 00:33:19.826 ************************************ 00:33:19.826 18:10:24 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:33:19.826 18:10:24 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:33:19.826 18:10:24 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:33:19.826 18:10:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:19.826 18:10:24 -- nvmf/common.sh@116 -- # sync 00:33:19.826 18:10:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:19.826 18:10:24 -- nvmf/common.sh@119 -- # set +e 00:33:19.826 18:10:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:19.826 18:10:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:19.826 rmmod nvme_tcp 00:33:19.826 rmmod nvme_fabrics 00:33:19.826 rmmod nvme_keyring 00:33:19.826 18:10:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:19.826 18:10:24 -- nvmf/common.sh@123 -- # set -e 00:33:19.826 18:10:24 -- nvmf/common.sh@124 -- # return 0 00:33:19.826 18:10:24 -- nvmf/common.sh@477 -- # '[' -n 1890572 ']' 00:33:19.826 18:10:24 -- nvmf/common.sh@478 -- # killprocess 1890572 00:33:19.826 18:10:24 -- common/autotest_common.sh@926 -- # '[' -z 1890572 ']' 00:33:19.826 18:10:24 -- common/autotest_common.sh@930 -- # kill -0 1890572 00:33:20.086 18:10:24 -- common/autotest_common.sh@931 -- # uname 00:33:20.086 18:10:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:20.086 18:10:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1890572 00:33:20.086 18:10:24 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:33:20.086 18:10:24 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:33:20.086 18:10:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1890572' 00:33:20.086 killing process with pid 1890572 00:33:20.087 18:10:24 -- common/autotest_common.sh@945 -- # kill 1890572 00:33:20.087 18:10:24 -- common/autotest_common.sh@950 -- # wait 1890572 00:33:20.346 18:10:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:33:20.347 18:10:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:20.347 18:10:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:20.347 18:10:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:20.347 18:10:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:20.347 18:10:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:20.347 18:10:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:20.347 18:10:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.256 18:10:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:22.256 00:33:22.256 real 0m22.343s 00:33:22.256 user 0m48.600s 00:33:22.256 sys 0m10.456s 00:33:22.256 18:10:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:22.256 18:10:26 -- common/autotest_common.sh@10 -- # set +x 00:33:22.256 ************************************ 00:33:22.256 END TEST nvmf_target_disconnect 00:33:22.256 
************************************ 00:33:22.518 18:10:26 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:33:22.518 18:10:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:22.518 18:10:26 -- common/autotest_common.sh@10 -- # set +x 00:33:22.518 18:10:26 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:33:22.518 00:33:22.518 real 25m40.756s 00:33:22.518 user 66m3.816s 00:33:22.518 sys 7m5.728s 00:33:22.518 18:10:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:22.518 18:10:26 -- common/autotest_common.sh@10 -- # set +x 00:33:22.518 ************************************ 00:33:22.518 END TEST nvmf_tcp 00:33:22.518 ************************************ 00:33:22.518 18:10:26 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:33:22.518 18:10:26 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:22.518 18:10:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:22.518 18:10:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:22.518 18:10:26 -- common/autotest_common.sh@10 -- # set +x 00:33:22.518 ************************************ 00:33:22.518 START TEST spdkcli_nvmf_tcp 00:33:22.518 ************************************ 00:33:22.518 18:10:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:22.518 * Looking for test storage... 00:33:22.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:22.518 18:10:26 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:22.518 18:10:26 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:22.518 18:10:26 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:22.518 18:10:26 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:22.518 18:10:26 -- nvmf/common.sh@7 -- # uname -s 00:33:22.518 18:10:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:22.518 18:10:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:22.518 18:10:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:22.518 18:10:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:22.518 18:10:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:22.518 18:10:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:22.518 18:10:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:22.518 18:10:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:22.518 18:10:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:22.518 18:10:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:22.518 18:10:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:33:22.518 18:10:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:33:22.518 18:10:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:22.518 18:10:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:22.518 18:10:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:22.518 18:10:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:22.518 18:10:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 
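[editorial sketch, not part of this run's output] The spdkcli run that follows drives everything through test/spdkcli/spdkcli_job.py, which takes a list of "'command' 'expected output' expect_success" triples, and check_match then diffs "spdkcli.py ll /nvmf" against a stored .match file. A trimmed-down illustration of that pattern, reusing commands from this run (spdkcli_job.py is a test helper, not a supported CLI; paths are relative to the spdk tree):

  ./test/spdkcli/spdkcli_job.py "'/bdevs/malloc create 32 512 Malloc1' 'Malloc1' True
  '/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True' 'nqn.2014-08.org.spdk:cnode1' True
  '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4' '127.0.0.1:4260' True"
  ./scripts/spdkcli.py ll /nvmf    # dump the resulting tree, as check_match does below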
00:33:22.518 18:10:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:22.518 18:10:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:22.518 18:10:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.518 18:10:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.518 18:10:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.518 18:10:26 -- paths/export.sh@5 -- # export PATH 00:33:22.518 18:10:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.518 18:10:26 -- nvmf/common.sh@46 -- # : 0 00:33:22.518 18:10:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:22.518 18:10:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:22.518 18:10:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:22.518 18:10:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:22.518 18:10:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:22.518 18:10:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:22.518 18:10:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:22.518 18:10:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:22.518 18:10:26 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:22.518 18:10:26 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:22.518 18:10:26 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:22.518 18:10:26 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:22.518 18:10:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:22.518 18:10:26 -- common/autotest_common.sh@10 -- # set +x 00:33:22.518 18:10:26 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:22.518 18:10:26 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1892160 00:33:22.518 18:10:26 -- spdkcli/common.sh@34 -- # waitforlisten 1892160 00:33:22.518 18:10:26 -- common/autotest_common.sh@819 -- # '[' -z 1892160 ']' 00:33:22.518 18:10:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:22.518 18:10:26 -- spdkcli/common.sh@32 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:22.518 18:10:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:22.518 18:10:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:22.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:22.518 18:10:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:22.518 18:10:26 -- common/autotest_common.sh@10 -- # set +x 00:33:22.779 [2024-07-22 18:10:26.815077] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:33:22.779 [2024-07-22 18:10:26.815150] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1892160 ] 00:33:22.779 EAL: No free 2048 kB hugepages reported on node 1 00:33:22.779 [2024-07-22 18:10:26.903057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:22.779 [2024-07-22 18:10:26.994195] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:22.779 [2024-07-22 18:10:26.994520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:22.779 [2024-07-22 18:10:26.994663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:23.720 18:10:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:23.720 18:10:27 -- common/autotest_common.sh@852 -- # return 0 00:33:23.720 18:10:27 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:23.720 18:10:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:23.720 18:10:27 -- common/autotest_common.sh@10 -- # set +x 00:33:23.720 18:10:27 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:23.720 18:10:27 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:23.720 18:10:27 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:23.720 18:10:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:23.720 18:10:27 -- common/autotest_common.sh@10 -- # set +x 00:33:23.720 18:10:27 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:23.720 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:23.720 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:23.720 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:23.720 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:23.720 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:23.720 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:23.720 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:23.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:23.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:23.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:23.720 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:23.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:23.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:23.720 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:23.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:23.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:23.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:23.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:23.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:23.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:23.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:23.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:23.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:23.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:23.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:23.720 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:23.720 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:23.720 ' 00:33:23.980 [2024-07-22 18:10:28.054191] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:33:25.891 [2024-07-22 18:10:30.053258] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:27.284 [2024-07-22 18:10:31.237203] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:29.194 [2024-07-22 18:10:33.411694] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:31.103 [2024-07-22 18:10:35.281411] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:32.512 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:32.512 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:32.512 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:32.512 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:32.512 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:32.512 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:32.512 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:32.512 Executing command: ['/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:32.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:32.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:32.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:32.512 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:32.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:32.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:32.512 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:32.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:32.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:32.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:32.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:32.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:32.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:32.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:32.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:32.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:32.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:32.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:32.512 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:32.513 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:32.772 18:10:36 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:32.772 18:10:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:32.772 18:10:36 -- common/autotest_common.sh@10 -- # set +x 00:33:32.772 18:10:36 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:32.772 18:10:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:32.772 18:10:36 -- common/autotest_common.sh@10 -- # set +x 00:33:32.772 18:10:36 -- spdkcli/nvmf.sh@69 -- # check_match 00:33:32.772 18:10:36 -- spdkcli/common.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:33.032 18:10:37 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:33.293 18:10:37 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:33.293 18:10:37 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:33.293 18:10:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:33.293 18:10:37 -- common/autotest_common.sh@10 -- # set +x 00:33:33.293 18:10:37 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:33.293 18:10:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:33.293 18:10:37 -- common/autotest_common.sh@10 -- # set +x 00:33:33.293 18:10:37 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:33.293 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:33.293 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:33.293 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:33.293 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:33.293 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:33.293 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:33.293 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:33.293 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:33.293 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:33.293 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:33.293 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:33.293 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:33.293 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:33.293 ' 00:33:38.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:38.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:38.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:38.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:38.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:38.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:38.577 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:38.577 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:38.577 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:38.577 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 
00:33:38.577 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:38.577 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:38.577 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:38.577 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:38.577 18:10:42 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:38.577 18:10:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:38.577 18:10:42 -- common/autotest_common.sh@10 -- # set +x 00:33:38.577 18:10:42 -- spdkcli/nvmf.sh@90 -- # killprocess 1892160 00:33:38.577 18:10:42 -- common/autotest_common.sh@926 -- # '[' -z 1892160 ']' 00:33:38.577 18:10:42 -- common/autotest_common.sh@930 -- # kill -0 1892160 00:33:38.577 18:10:42 -- common/autotest_common.sh@931 -- # uname 00:33:38.577 18:10:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:38.577 18:10:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1892160 00:33:38.577 18:10:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:38.577 18:10:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:38.577 18:10:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1892160' 00:33:38.577 killing process with pid 1892160 00:33:38.577 18:10:42 -- common/autotest_common.sh@945 -- # kill 1892160 00:33:38.577 [2024-07-22 18:10:42.347262] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:33:38.577 18:10:42 -- common/autotest_common.sh@950 -- # wait 1892160 00:33:38.577 18:10:42 -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:38.577 18:10:42 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:38.577 18:10:42 -- spdkcli/common.sh@13 -- # '[' -n 1892160 ']' 00:33:38.577 18:10:42 -- spdkcli/common.sh@14 -- # killprocess 1892160 00:33:38.577 18:10:42 -- common/autotest_common.sh@926 -- # '[' -z 1892160 ']' 00:33:38.577 18:10:42 -- common/autotest_common.sh@930 -- # kill -0 1892160 00:33:38.577 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1892160) - No such process 00:33:38.577 18:10:42 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1892160 is not found' 00:33:38.577 Process with pid 1892160 is not found 00:33:38.577 18:10:42 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:38.577 18:10:42 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:38.577 18:10:42 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:38.577 00:33:38.577 real 0m15.840s 00:33:38.577 user 0m32.862s 00:33:38.577 sys 0m0.732s 00:33:38.577 18:10:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:38.577 18:10:42 -- common/autotest_common.sh@10 -- # set +x 00:33:38.577 ************************************ 00:33:38.577 END TEST spdkcli_nvmf_tcp 00:33:38.577 ************************************ 00:33:38.577 18:10:42 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:38.577 18:10:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:33:38.577 18:10:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:38.577 18:10:42 -- 
common/autotest_common.sh@10 -- # set +x 00:33:38.577 ************************************ 00:33:38.577 START TEST nvmf_identify_passthru 00:33:38.577 ************************************ 00:33:38.577 18:10:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:38.577 * Looking for test storage... 00:33:38.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:38.577 18:10:42 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:38.577 18:10:42 -- nvmf/common.sh@7 -- # uname -s 00:33:38.577 18:10:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:38.577 18:10:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:38.577 18:10:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:38.577 18:10:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:38.577 18:10:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:38.577 18:10:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:38.577 18:10:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:38.577 18:10:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:38.577 18:10:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:38.577 18:10:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:38.577 18:10:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:33:38.577 18:10:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:33:38.577 18:10:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:38.577 18:10:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:38.577 18:10:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:38.577 18:10:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:38.577 18:10:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:38.577 18:10:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:38.577 18:10:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:38.577 18:10:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.577 18:10:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.577 18:10:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.578 18:10:42 -- paths/export.sh@5 -- # export PATH 00:33:38.578 18:10:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.578 18:10:42 -- nvmf/common.sh@46 -- # : 0 00:33:38.578 18:10:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:38.578 18:10:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:38.578 18:10:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:38.578 18:10:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:38.578 18:10:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:38.578 18:10:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:38.578 18:10:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:38.578 18:10:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:38.578 18:10:42 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:38.578 18:10:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:38.578 18:10:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:38.578 18:10:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:38.578 18:10:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.578 18:10:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.578 18:10:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.578 18:10:42 -- paths/export.sh@5 -- # export PATH 00:33:38.578 18:10:42 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.578 18:10:42 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:38.578 18:10:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:38.578 18:10:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:38.578 18:10:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:38.578 18:10:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:38.578 18:10:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:38.578 18:10:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.578 18:10:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:38.578 18:10:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.578 18:10:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:33:38.578 18:10:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:38.578 18:10:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:38.578 18:10:42 -- common/autotest_common.sh@10 -- # set +x 00:33:46.725 18:10:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:46.725 18:10:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:46.725 18:10:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:46.725 18:10:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:46.725 18:10:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:46.725 18:10:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:46.725 18:10:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:46.725 18:10:50 -- nvmf/common.sh@294 -- # net_devs=() 00:33:46.725 18:10:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:46.725 18:10:50 -- nvmf/common.sh@295 -- # e810=() 00:33:46.725 18:10:50 -- nvmf/common.sh@295 -- # local -ga e810 00:33:46.725 18:10:50 -- nvmf/common.sh@296 -- # x722=() 00:33:46.725 18:10:50 -- nvmf/common.sh@296 -- # local -ga x722 00:33:46.725 18:10:50 -- nvmf/common.sh@297 -- # mlx=() 00:33:46.725 18:10:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:46.725 18:10:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:46.725 18:10:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:46.725 18:10:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:46.725 18:10:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:46.725 18:10:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:46.725 18:10:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:46.725 18:10:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:46.725 18:10:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:46.725 18:10:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:46.725 18:10:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:46.725 18:10:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:46.725 18:10:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:46.725 18:10:50 -- 
nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:33:46.725 18:10:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:33:46.725 18:10:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:33:46.725 18:10:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:33:46.725 18:10:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:46.725 18:10:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:46.725 18:10:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:46.725 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:46.725 18:10:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:46.725 18:10:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:46.725 18:10:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.725 18:10:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.725 18:10:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:46.725 18:10:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:46.725 18:10:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:46.725 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:46.725 18:10:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:46.725 18:10:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:46.725 18:10:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.725 18:10:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.725 18:10:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:46.725 18:10:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:46.725 18:10:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:33:46.725 18:10:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:33:46.725 18:10:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:46.725 18:10:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.725 18:10:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:46.725 18:10:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.725 18:10:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:46.725 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:46.725 18:10:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.725 18:10:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:46.725 18:10:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.725 18:10:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:46.725 18:10:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.725 18:10:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:46.725 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:46.725 18:10:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.725 18:10:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:46.725 18:10:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:46.725 18:10:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:46.725 18:10:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:46.725 18:10:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:46.725 18:10:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:46.725 18:10:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:46.725 18:10:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:46.725 18:10:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:46.725 18:10:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:46.725 18:10:50 -- nvmf/common.sh@236 
-- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:46.725 18:10:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:46.725 18:10:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:46.725 18:10:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:46.725 18:10:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:46.725 18:10:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:46.725 18:10:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:46.725 18:10:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:46.725 18:10:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:46.726 18:10:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:46.726 18:10:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:46.726 18:10:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:46.726 18:10:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:46.726 18:10:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:46.726 18:10:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:46.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:46.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.499 ms 00:33:46.726 00:33:46.726 --- 10.0.0.2 ping statistics --- 00:33:46.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.726 rtt min/avg/max/mdev = 0.499/0.499/0.499/0.000 ms 00:33:46.726 18:10:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:46.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:46.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:33:46.726 00:33:46.726 --- 10.0.0.1 ping statistics --- 00:33:46.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.726 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:33:46.726 18:10:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:46.726 18:10:50 -- nvmf/common.sh@410 -- # return 0 00:33:46.726 18:10:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:33:46.726 18:10:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:46.726 18:10:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:46.726 18:10:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:46.726 18:10:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:46.726 18:10:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:46.726 18:10:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:46.726 18:10:50 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:46.726 18:10:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:46.726 18:10:50 -- common/autotest_common.sh@10 -- # set +x 00:33:46.726 18:10:50 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:46.726 18:10:50 -- common/autotest_common.sh@1509 -- # bdfs=() 00:33:46.726 18:10:50 -- common/autotest_common.sh@1509 -- # local bdfs 00:33:46.726 18:10:50 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:33:46.726 18:10:50 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:33:46.726 18:10:50 -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:46.726 18:10:50 -- common/autotest_common.sh@1498 -- # local bdfs 00:33:46.726 18:10:50 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:33:46.726 18:10:50 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:46.726 18:10:50 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:46.726 18:10:50 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:46.726 18:10:50 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:33:46.726 18:10:50 -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:33:46.726 18:10:50 -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:33:46.726 18:10:50 -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:33:46.726 18:10:50 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:33:46.726 18:10:50 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:46.726 18:10:50 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:46.726 EAL: No free 2048 kB hugepages reported on node 1 00:33:52.016 18:10:55 -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ9512038S2P0BGN 00:33:52.016 18:10:55 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:33:52.016 18:10:55 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:52.016 18:10:55 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:52.016 EAL: No free 2048 kB hugepages reported on node 1 00:33:57.300 18:11:00 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:57.300 18:11:00 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:57.300 18:11:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:57.300 18:11:00 -- common/autotest_common.sh@10 -- # set +x 00:33:57.300 18:11:01 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:57.300 18:11:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:57.300 18:11:01 -- common/autotest_common.sh@10 -- # set +x 00:33:57.300 18:11:01 -- target/identify_passthru.sh@31 -- # nvmfpid=1900458 00:33:57.300 18:11:01 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:57.300 18:11:01 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:57.300 18:11:01 -- target/identify_passthru.sh@35 -- # waitforlisten 1900458 00:33:57.300 18:11:01 -- common/autotest_common.sh@819 -- # '[' -z 1900458 ']' 00:33:57.300 18:11:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:57.300 18:11:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:57.300 18:11:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:57.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:57.300 18:11:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:57.300 18:11:01 -- common/autotest_common.sh@10 -- # set +x 00:33:57.300 [2024-07-22 18:11:01.078910] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
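[editorial sketch, not part of this run's output] The rpc_cmd invocations that follow are driven through scripts/rpc.py against the target just launched with --wait-for-rpc. Condensed into plain rpc.py calls (assuming the default /var/tmp/spdk.sock and the 0000:65:00.0 controller found above), the passthru setup exercised below is:

  ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr    # issued before framework_start_init, hence --wait-for-rpc
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

spdk_nvme_identify run against the exported subsystem must then report the same serial and model number as the local PCIe identify above, which is what the PHLJ9512038S2P0BGN / INTEL comparisons at the end of this test verify.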
00:33:57.300 [2024-07-22 18:11:01.078962] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:57.300 EAL: No free 2048 kB hugepages reported on node 1 00:33:57.300 [2024-07-22 18:11:01.167872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:57.300 [2024-07-22 18:11:01.228809] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:57.300 [2024-07-22 18:11:01.228933] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:57.300 [2024-07-22 18:11:01.228942] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:57.300 [2024-07-22 18:11:01.228950] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:57.300 [2024-07-22 18:11:01.229049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:57.300 [2024-07-22 18:11:01.229165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:57.300 [2024-07-22 18:11:01.229299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:57.300 [2024-07-22 18:11:01.229301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:57.870 18:11:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:57.870 18:11:01 -- common/autotest_common.sh@852 -- # return 0 00:33:57.870 18:11:01 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:57.870 18:11:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:57.870 18:11:01 -- common/autotest_common.sh@10 -- # set +x 00:33:57.870 INFO: Log level set to 20 00:33:57.870 INFO: Requests: 00:33:57.870 { 00:33:57.870 "jsonrpc": "2.0", 00:33:57.870 "method": "nvmf_set_config", 00:33:57.870 "id": 1, 00:33:57.870 "params": { 00:33:57.870 "admin_cmd_passthru": { 00:33:57.870 "identify_ctrlr": true 00:33:57.870 } 00:33:57.870 } 00:33:57.870 } 00:33:57.870 00:33:57.870 INFO: response: 00:33:57.870 { 00:33:57.870 "jsonrpc": "2.0", 00:33:57.870 "id": 1, 00:33:57.871 "result": true 00:33:57.871 } 00:33:57.871 00:33:57.871 18:11:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:57.871 18:11:01 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:57.871 18:11:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:57.871 18:11:01 -- common/autotest_common.sh@10 -- # set +x 00:33:57.871 INFO: Setting log level to 20 00:33:57.871 INFO: Setting log level to 20 00:33:57.871 INFO: Log level set to 20 00:33:57.871 INFO: Log level set to 20 00:33:57.871 INFO: Requests: 00:33:57.871 { 00:33:57.871 "jsonrpc": "2.0", 00:33:57.871 "method": "framework_start_init", 00:33:57.871 "id": 1 00:33:57.871 } 00:33:57.871 00:33:57.871 INFO: Requests: 00:33:57.871 { 00:33:57.871 "jsonrpc": "2.0", 00:33:57.871 "method": "framework_start_init", 00:33:57.871 "id": 1 00:33:57.871 } 00:33:57.871 00:33:57.871 [2024-07-22 18:11:01.982740] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:57.871 INFO: response: 00:33:57.871 { 00:33:57.871 "jsonrpc": "2.0", 00:33:57.871 "id": 1, 00:33:57.871 "result": true 00:33:57.871 } 00:33:57.871 00:33:57.871 INFO: response: 00:33:57.871 { 00:33:57.871 "jsonrpc": "2.0", 00:33:57.871 "id": 1, 00:33:57.871 "result": true 00:33:57.871 } 00:33:57.871 00:33:57.871 18:11:01 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:57.871 18:11:01 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:57.871 18:11:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:57.871 18:11:01 -- common/autotest_common.sh@10 -- # set +x 00:33:57.871 INFO: Setting log level to 40 00:33:57.871 INFO: Setting log level to 40 00:33:57.871 INFO: Setting log level to 40 00:33:57.871 [2024-07-22 18:11:01.995920] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:57.871 18:11:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:57.871 18:11:02 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:57.871 18:11:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:57.871 18:11:02 -- common/autotest_common.sh@10 -- # set +x 00:33:57.871 18:11:02 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:33:57.871 18:11:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:57.871 18:11:02 -- common/autotest_common.sh@10 -- # set +x 00:34:01.169 Nvme0n1 00:34:01.169 18:11:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:01.169 18:11:04 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:01.169 18:11:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:01.169 18:11:04 -- common/autotest_common.sh@10 -- # set +x 00:34:01.169 18:11:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:01.169 18:11:04 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:01.169 18:11:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:01.169 18:11:04 -- common/autotest_common.sh@10 -- # set +x 00:34:01.169 18:11:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:01.169 18:11:04 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:01.169 18:11:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:01.169 18:11:04 -- common/autotest_common.sh@10 -- # set +x 00:34:01.169 [2024-07-22 18:11:04.908872] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:01.169 18:11:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:01.169 18:11:04 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:01.169 18:11:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:01.169 18:11:04 -- common/autotest_common.sh@10 -- # set +x 00:34:01.169 [2024-07-22 18:11:04.920682] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:34:01.169 [ 00:34:01.169 { 00:34:01.169 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:01.169 "subtype": "Discovery", 00:34:01.169 "listen_addresses": [], 00:34:01.169 "allow_any_host": true, 00:34:01.169 "hosts": [] 00:34:01.169 }, 00:34:01.169 { 00:34:01.169 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:01.169 "subtype": "NVMe", 00:34:01.169 "listen_addresses": [ 00:34:01.169 { 00:34:01.169 "transport": "TCP", 00:34:01.169 "trtype": "TCP", 00:34:01.169 "adrfam": "IPv4", 00:34:01.169 "traddr": "10.0.0.2", 00:34:01.169 "trsvcid": "4420" 00:34:01.169 } 00:34:01.169 ], 00:34:01.169 "allow_any_host": true, 00:34:01.169 "hosts": [], 00:34:01.169 "serial_number": "SPDK00000000000001", 
00:34:01.169 "model_number": "SPDK bdev Controller", 00:34:01.169 "max_namespaces": 1, 00:34:01.169 "min_cntlid": 1, 00:34:01.169 "max_cntlid": 65519, 00:34:01.169 "namespaces": [ 00:34:01.169 { 00:34:01.169 "nsid": 1, 00:34:01.169 "bdev_name": "Nvme0n1", 00:34:01.169 "name": "Nvme0n1", 00:34:01.169 "nguid": "3F53E539B8E842A5B02CA8676BF2AF2F", 00:34:01.169 "uuid": "3f53e539-b8e8-42a5-b02c-a8676bf2af2f" 00:34:01.169 } 00:34:01.169 ] 00:34:01.169 } 00:34:01.169 ] 00:34:01.169 18:11:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:01.169 18:11:04 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:01.169 18:11:04 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:01.169 18:11:04 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:01.169 EAL: No free 2048 kB hugepages reported on node 1 00:34:01.169 18:11:05 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ9512038S2P0BGN 00:34:01.169 18:11:05 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:01.169 18:11:05 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:01.169 18:11:05 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:01.169 EAL: No free 2048 kB hugepages reported on node 1 00:34:01.169 18:11:05 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:01.169 18:11:05 -- target/identify_passthru.sh@63 -- # '[' PHLJ9512038S2P0BGN '!=' PHLJ9512038S2P0BGN ']' 00:34:01.169 18:11:05 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:01.169 18:11:05 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:01.169 18:11:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:01.169 18:11:05 -- common/autotest_common.sh@10 -- # set +x 00:34:01.169 18:11:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:01.169 18:11:05 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:01.169 18:11:05 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:01.169 18:11:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:34:01.169 18:11:05 -- nvmf/common.sh@116 -- # sync 00:34:01.169 18:11:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:34:01.169 18:11:05 -- nvmf/common.sh@119 -- # set +e 00:34:01.169 18:11:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:34:01.169 18:11:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:34:01.169 rmmod nvme_tcp 00:34:01.169 rmmod nvme_fabrics 00:34:01.169 rmmod nvme_keyring 00:34:01.169 18:11:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:34:01.169 18:11:05 -- nvmf/common.sh@123 -- # set -e 00:34:01.169 18:11:05 -- nvmf/common.sh@124 -- # return 0 00:34:01.169 18:11:05 -- nvmf/common.sh@477 -- # '[' -n 1900458 ']' 00:34:01.169 18:11:05 -- nvmf/common.sh@478 -- # killprocess 1900458 00:34:01.169 18:11:05 -- common/autotest_common.sh@926 -- # '[' -z 1900458 ']' 00:34:01.169 18:11:05 -- common/autotest_common.sh@930 -- # kill -0 1900458 00:34:01.169 18:11:05 -- common/autotest_common.sh@931 -- # uname 00:34:01.169 18:11:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:34:01.169 18:11:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1900458 00:34:01.169 18:11:05 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:34:01.169 18:11:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:34:01.169 18:11:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1900458' 00:34:01.169 killing process with pid 1900458 00:34:01.169 18:11:05 -- common/autotest_common.sh@945 -- # kill 1900458 00:34:01.170 [2024-07-22 18:11:05.375867] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:34:01.170 18:11:05 -- common/autotest_common.sh@950 -- # wait 1900458 00:34:03.711 18:11:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:34:03.711 18:11:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:34:03.711 18:11:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:34:03.711 18:11:07 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:03.711 18:11:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:34:03.711 18:11:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:03.711 18:11:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:03.711 18:11:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:05.621 18:11:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:34:05.621 00:34:05.621 real 0m27.313s 00:34:05.621 user 0m36.248s 00:34:05.621 sys 0m6.784s 00:34:05.621 18:11:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:05.621 18:11:09 -- common/autotest_common.sh@10 -- # set +x 00:34:05.621 ************************************ 00:34:05.621 END TEST nvmf_identify_passthru 00:34:05.621 ************************************ 00:34:05.621 18:11:09 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:05.621 18:11:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:05.621 18:11:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:05.621 18:11:09 -- common/autotest_common.sh@10 -- # set +x 00:34:05.621 ************************************ 00:34:05.621 START TEST nvmf_dif 00:34:05.621 ************************************ 00:34:05.621 18:11:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:05.882 * Looking for test storage... 
00:34:05.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:05.882 18:11:09 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:05.882 18:11:09 -- nvmf/common.sh@7 -- # uname -s 00:34:05.883 18:11:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:05.883 18:11:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:05.883 18:11:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:05.883 18:11:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:05.883 18:11:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:05.883 18:11:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:05.883 18:11:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:05.883 18:11:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:05.883 18:11:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:05.883 18:11:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:05.883 18:11:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:34:05.883 18:11:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:34:05.883 18:11:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:05.883 18:11:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:05.883 18:11:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:05.883 18:11:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:05.883 18:11:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:05.883 18:11:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:05.883 18:11:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:05.883 18:11:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.883 18:11:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.883 18:11:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.883 18:11:09 -- paths/export.sh@5 -- # export PATH 00:34:05.883 18:11:09 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.883 18:11:09 -- nvmf/common.sh@46 -- # : 0 00:34:05.883 18:11:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:34:05.883 18:11:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:34:05.883 18:11:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:34:05.883 18:11:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:05.883 18:11:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:05.883 18:11:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:34:05.883 18:11:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:34:05.883 18:11:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:34:05.883 18:11:09 -- target/dif.sh@15 -- # NULL_META=16 00:34:05.883 18:11:09 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:05.883 18:11:09 -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:05.883 18:11:09 -- target/dif.sh@15 -- # NULL_DIF=1 00:34:05.883 18:11:09 -- target/dif.sh@135 -- # nvmftestinit 00:34:05.883 18:11:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:34:05.883 18:11:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:05.883 18:11:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:34:05.883 18:11:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:34:05.883 18:11:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:34:05.883 18:11:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:05.883 18:11:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:05.883 18:11:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:05.883 18:11:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:34:05.883 18:11:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:34:05.883 18:11:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:34:05.883 18:11:10 -- common/autotest_common.sh@10 -- # set +x 00:34:14.023 18:11:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:34:14.023 18:11:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:34:14.023 18:11:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:34:14.023 18:11:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:34:14.023 18:11:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:34:14.023 18:11:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:34:14.023 18:11:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:34:14.023 18:11:17 -- nvmf/common.sh@294 -- # net_devs=() 00:34:14.023 18:11:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:34:14.023 18:11:17 -- nvmf/common.sh@295 -- # e810=() 00:34:14.023 18:11:17 -- nvmf/common.sh@295 -- # local -ga e810 00:34:14.023 18:11:17 -- nvmf/common.sh@296 -- # x722=() 00:34:14.023 18:11:17 -- nvmf/common.sh@296 -- # local -ga x722 00:34:14.023 18:11:17 -- nvmf/common.sh@297 -- # mlx=() 00:34:14.023 18:11:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:34:14.023 18:11:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:14.023 18:11:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:14.023 18:11:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:14.023 18:11:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:34:14.023 18:11:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:14.023 18:11:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:14.023 18:11:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:14.023 18:11:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:14.023 18:11:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:14.023 18:11:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:14.023 18:11:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:14.023 18:11:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:34:14.023 18:11:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:34:14.023 18:11:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:34:14.023 18:11:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:34:14.023 18:11:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:34:14.023 18:11:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:34:14.023 18:11:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:14.023 18:11:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:14.023 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:14.023 18:11:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:14.023 18:11:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:14.023 18:11:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:14.023 18:11:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:14.023 18:11:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:14.023 18:11:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:34:14.023 18:11:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:14.023 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:14.023 18:11:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:34:14.023 18:11:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:34:14.023 18:11:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:14.023 18:11:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:14.023 18:11:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:34:14.023 18:11:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:34:14.023 18:11:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:34:14.023 18:11:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:34:14.023 18:11:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:14.023 18:11:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:14.023 18:11:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:14.023 18:11:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:14.023 18:11:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:14.023 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:14.023 18:11:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:34:14.023 18:11:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:34:14.023 18:11:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:14.023 18:11:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:34:14.023 18:11:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:14.023 18:11:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:14.023 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:14.023 18:11:17 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:34:14.023 18:11:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:34:14.023 18:11:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:34:14.023 18:11:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:34:14.023 18:11:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:34:14.023 18:11:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:34:14.023 18:11:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:14.023 18:11:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:14.023 18:11:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:14.023 18:11:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:34:14.023 18:11:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:14.023 18:11:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:14.023 18:11:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:34:14.023 18:11:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:14.023 18:11:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:14.023 18:11:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:34:14.023 18:11:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:34:14.023 18:11:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:34:14.023 18:11:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:14.023 18:11:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:14.023 18:11:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:14.023 18:11:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:34:14.023 18:11:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:14.023 18:11:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:14.023 18:11:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:14.023 18:11:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:34:14.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:14.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:34:14.023 00:34:14.023 --- 10.0.0.2 ping statistics --- 00:34:14.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:14.023 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:34:14.023 18:11:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:14.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:14.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:34:14.023 00:34:14.023 --- 10.0.0.1 ping statistics --- 00:34:14.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:14.023 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:34:14.023 18:11:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:14.023 18:11:18 -- nvmf/common.sh@410 -- # return 0 00:34:14.023 18:11:18 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:34:14.023 18:11:18 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:18.229 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:34:18.229 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:34:18.229 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:34:18.229 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:34:18.229 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:34:18.229 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:34:18.229 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:34:18.229 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:34:18.229 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:34:18.229 0000:65:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:18.230 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:34:18.230 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:34:18.230 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:34:18.230 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:34:18.230 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:34:18.230 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:34:18.230 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:34:18.230 18:11:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:18.230 18:11:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:34:18.230 18:11:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:34:18.230 18:11:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:18.230 18:11:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:34:18.230 18:11:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:34:18.230 18:11:21 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:18.230 18:11:21 -- target/dif.sh@137 -- # nvmfappstart 00:34:18.230 18:11:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:34:18.230 18:11:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:18.230 18:11:21 -- common/autotest_common.sh@10 -- # set +x 00:34:18.230 18:11:21 -- nvmf/common.sh@469 -- # nvmfpid=1907465 00:34:18.230 18:11:21 -- nvmf/common.sh@470 -- # waitforlisten 1907465 00:34:18.230 18:11:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:18.230 18:11:21 -- common/autotest_common.sh@819 -- # '[' -z 1907465 ']' 00:34:18.230 18:11:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:18.230 18:11:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:34:18.230 18:11:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:18.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
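The trace above moves the target-side E810 port into a private network namespace, gives each side an address on 10.0.0.0/24, opens TCP/4420 through iptables, and then starts nvmf_tgt inside that namespace. A condensed sketch of the same plumbing (interface names cvl_0_0/cvl_0_1 and the addressing are specific to this test host; paths assume the SPDK repo root):

# target port goes into its own namespace; the initiator side stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2          # reachability check before the target comes up
# start nvmf_tgt inside the namespace, as nvmfappstart does in the trace above
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &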
00:34:18.230 18:11:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:34:18.230 18:11:21 -- common/autotest_common.sh@10 -- # set +x 00:34:18.230 [2024-07-22 18:11:22.006835] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:34:18.230 [2024-07-22 18:11:22.006901] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:18.230 EAL: No free 2048 kB hugepages reported on node 1 00:34:18.230 [2024-07-22 18:11:22.098506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:18.230 [2024-07-22 18:11:22.187116] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:34:18.230 [2024-07-22 18:11:22.187268] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:18.230 [2024-07-22 18:11:22.187278] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:18.230 [2024-07-22 18:11:22.187285] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:18.230 [2024-07-22 18:11:22.187322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:18.802 18:11:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:34:18.802 18:11:22 -- common/autotest_common.sh@852 -- # return 0 00:34:18.802 18:11:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:34:18.802 18:11:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:18.802 18:11:22 -- common/autotest_common.sh@10 -- # set +x 00:34:18.802 18:11:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:18.802 18:11:22 -- target/dif.sh@139 -- # create_transport 00:34:18.802 18:11:22 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:18.802 18:11:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:18.802 18:11:22 -- common/autotest_common.sh@10 -- # set +x 00:34:18.802 [2024-07-22 18:11:22.906392] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:18.802 18:11:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:18.802 18:11:22 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:18.802 18:11:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:18.802 18:11:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:18.802 18:11:22 -- common/autotest_common.sh@10 -- # set +x 00:34:18.802 ************************************ 00:34:18.802 START TEST fio_dif_1_default 00:34:18.802 ************************************ 00:34:18.802 18:11:22 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:34:18.802 18:11:22 -- target/dif.sh@86 -- # create_subsystems 0 00:34:18.802 18:11:22 -- target/dif.sh@28 -- # local sub 00:34:18.802 18:11:22 -- target/dif.sh@30 -- # for sub in "$@" 00:34:18.802 18:11:22 -- target/dif.sh@31 -- # create_subsystem 0 00:34:18.802 18:11:22 -- target/dif.sh@18 -- # local sub_id=0 00:34:18.802 18:11:22 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:18.802 18:11:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:18.802 18:11:22 -- common/autotest_common.sh@10 -- # set +x 00:34:18.802 bdev_null0 00:34:18.802 18:11:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:18.802 18:11:22 -- target/dif.sh@22 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:18.802 18:11:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:18.802 18:11:22 -- common/autotest_common.sh@10 -- # set +x 00:34:18.802 18:11:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:18.802 18:11:22 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:18.802 18:11:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:18.802 18:11:22 -- common/autotest_common.sh@10 -- # set +x 00:34:18.802 18:11:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:18.802 18:11:22 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:18.802 18:11:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:18.802 18:11:22 -- common/autotest_common.sh@10 -- # set +x 00:34:18.802 [2024-07-22 18:11:22.962722] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:18.802 18:11:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:18.802 18:11:22 -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:18.802 18:11:22 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:18.802 18:11:22 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:18.802 18:11:22 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:18.802 18:11:22 -- nvmf/common.sh@520 -- # config=() 00:34:18.802 18:11:22 -- nvmf/common.sh@520 -- # local subsystem config 00:34:18.802 18:11:22 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:18.802 18:11:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:18.802 18:11:22 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:18.802 18:11:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:18.802 { 00:34:18.802 "params": { 00:34:18.802 "name": "Nvme$subsystem", 00:34:18.802 "trtype": "$TEST_TRANSPORT", 00:34:18.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:18.802 "adrfam": "ipv4", 00:34:18.802 "trsvcid": "$NVMF_PORT", 00:34:18.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:18.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:18.803 "hdgst": ${hdgst:-false}, 00:34:18.803 "ddgst": ${ddgst:-false} 00:34:18.803 }, 00:34:18.803 "method": "bdev_nvme_attach_controller" 00:34:18.803 } 00:34:18.803 EOF 00:34:18.803 )") 00:34:18.803 18:11:22 -- target/dif.sh@82 -- # gen_fio_conf 00:34:18.803 18:11:22 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:18.803 18:11:22 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:18.803 18:11:22 -- target/dif.sh@54 -- # local file 00:34:18.803 18:11:22 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:18.803 18:11:22 -- target/dif.sh@56 -- # cat 00:34:18.803 18:11:22 -- common/autotest_common.sh@1320 -- # shift 00:34:18.803 18:11:22 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:18.803 18:11:22 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:18.803 18:11:22 -- nvmf/common.sh@542 -- # cat 00:34:18.803 18:11:22 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:18.803 18:11:22 -- target/dif.sh@72 -- # (( file <= files )) 00:34:18.803 18:11:22 -- common/autotest_common.sh@1324 -- # 
ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:18.803 18:11:22 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:18.803 18:11:22 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:18.803 18:11:22 -- nvmf/common.sh@544 -- # jq . 00:34:18.803 18:11:22 -- nvmf/common.sh@545 -- # IFS=, 00:34:18.803 18:11:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:18.803 "params": { 00:34:18.803 "name": "Nvme0", 00:34:18.803 "trtype": "tcp", 00:34:18.803 "traddr": "10.0.0.2", 00:34:18.803 "adrfam": "ipv4", 00:34:18.803 "trsvcid": "4420", 00:34:18.803 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:18.803 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:18.803 "hdgst": false, 00:34:18.803 "ddgst": false 00:34:18.803 }, 00:34:18.803 "method": "bdev_nvme_attach_controller" 00:34:18.803 }' 00:34:18.803 18:11:23 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:18.803 18:11:23 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:18.803 18:11:23 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:18.803 18:11:23 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:18.803 18:11:23 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:18.803 18:11:23 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:18.803 18:11:23 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:18.803 18:11:23 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:18.803 18:11:23 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:18.803 18:11:23 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:19.066 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:19.066 fio-3.35 00:34:19.066 Starting 1 thread 00:34:19.326 EAL: No free 2048 kB hugepages reported on node 1 00:34:19.586 [2024-07-22 18:11:23.670299] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
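The fio_dif_1_default setup traced above reduces to a short RPC sequence; a minimal sketch, assuming nvmf_tgt is already up on the default /var/tmp/spdk.sock and scripts/rpc.py is invoked from the SPDK repo root:

# TCP transport with DIF insert/strip enabled
./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
# null bdev: size 64, 512-byte blocks, 16-byte metadata, DIF type 1
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420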
00:34:19.586 [2024-07-22 18:11:23.670333] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:29.582 00:34:29.582 filename0: (groupid=0, jobs=1): err= 0: pid=1907954: Mon Jul 22 18:11:33 2024 00:34:29.582 read: IOPS=190, BW=763KiB/s (781kB/s)(7632KiB/10008msec) 00:34:29.582 slat (nsec): min=3833, max=20055, avg=7380.50, stdev=513.46 00:34:29.582 clat (usec): min=604, max=43091, avg=20960.64, stdev=20122.56 00:34:29.582 lat (usec): min=611, max=43099, avg=20968.02, stdev=20122.53 00:34:29.582 clat percentiles (usec): 00:34:29.582 | 1.00th=[ 668], 5.00th=[ 840], 10.00th=[ 848], 20.00th=[ 865], 00:34:29.582 | 30.00th=[ 873], 40.00th=[ 898], 50.00th=[ 6259], 60.00th=[41157], 00:34:29.582 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:29.582 | 99.00th=[41157], 99.50th=[41681], 99.90th=[43254], 99.95th=[43254], 00:34:29.582 | 99.99th=[43254] 00:34:29.582 bw ( KiB/s): min= 704, max= 768, per=99.79%, avg=761.60, stdev=16.74, samples=20 00:34:29.582 iops : min= 176, max= 192, avg=190.40, stdev= 4.19, samples=20 00:34:29.582 lat (usec) : 750=2.31%, 1000=47.48% 00:34:29.582 lat (msec) : 2=0.10%, 10=0.21%, 50=49.90% 00:34:29.582 cpu : usr=95.42%, sys=4.38%, ctx=14, majf=0, minf=237 00:34:29.582 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:29.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.582 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.582 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:29.582 00:34:29.582 Run status group 0 (all jobs): 00:34:29.582 READ: bw=763KiB/s (781kB/s), 763KiB/s-763KiB/s (781kB/s-781kB/s), io=7632KiB (7815kB), run=10008-10008msec 00:34:29.848 18:11:33 -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:29.848 18:11:33 -- target/dif.sh@43 -- # local sub 00:34:29.848 18:11:33 -- target/dif.sh@45 -- # for sub in "$@" 00:34:29.848 18:11:33 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:29.848 18:11:33 -- target/dif.sh@36 -- # local sub_id=0 00:34:29.848 18:11:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:29.848 18:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:29.848 18:11:33 -- common/autotest_common.sh@10 -- # set +x 00:34:29.848 18:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:29.848 18:11:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:29.848 18:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:29.848 18:11:33 -- common/autotest_common.sh@10 -- # set +x 00:34:29.848 18:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:29.848 00:34:29.848 real 0m11.018s 00:34:29.848 user 0m17.855s 00:34:29.848 sys 0m0.727s 00:34:29.848 18:11:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:29.848 18:11:33 -- common/autotest_common.sh@10 -- # set +x 00:34:29.848 ************************************ 00:34:29.848 END TEST fio_dif_1_default 00:34:29.848 ************************************ 00:34:29.848 18:11:33 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:29.848 18:11:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:29.848 18:11:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:29.848 18:11:33 -- common/autotest_common.sh@10 -- # set +x 00:34:29.848 ************************************ 00:34:29.848 START 
TEST fio_dif_1_multi_subsystems 00:34:29.848 ************************************ 00:34:29.848 18:11:33 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:34:29.848 18:11:33 -- target/dif.sh@92 -- # local files=1 00:34:29.848 18:11:33 -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:29.848 18:11:33 -- target/dif.sh@28 -- # local sub 00:34:29.848 18:11:33 -- target/dif.sh@30 -- # for sub in "$@" 00:34:29.848 18:11:33 -- target/dif.sh@31 -- # create_subsystem 0 00:34:29.848 18:11:33 -- target/dif.sh@18 -- # local sub_id=0 00:34:29.848 18:11:33 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:29.848 18:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:29.848 18:11:33 -- common/autotest_common.sh@10 -- # set +x 00:34:29.848 bdev_null0 00:34:29.848 18:11:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:29.848 18:11:33 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:29.848 18:11:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:29.848 18:11:33 -- common/autotest_common.sh@10 -- # set +x 00:34:29.848 18:11:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:29.848 18:11:34 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:29.848 18:11:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:29.848 18:11:34 -- common/autotest_common.sh@10 -- # set +x 00:34:29.848 18:11:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:29.848 18:11:34 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:29.848 18:11:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:29.848 18:11:34 -- common/autotest_common.sh@10 -- # set +x 00:34:29.848 [2024-07-22 18:11:34.028806] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:29.848 18:11:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:29.848 18:11:34 -- target/dif.sh@30 -- # for sub in "$@" 00:34:29.848 18:11:34 -- target/dif.sh@31 -- # create_subsystem 1 00:34:29.848 18:11:34 -- target/dif.sh@18 -- # local sub_id=1 00:34:29.848 18:11:34 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:29.848 18:11:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:29.848 18:11:34 -- common/autotest_common.sh@10 -- # set +x 00:34:29.848 bdev_null1 00:34:29.848 18:11:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:29.848 18:11:34 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:29.848 18:11:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:29.848 18:11:34 -- common/autotest_common.sh@10 -- # set +x 00:34:29.848 18:11:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:29.848 18:11:34 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:29.848 18:11:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:29.848 18:11:34 -- common/autotest_common.sh@10 -- # set +x 00:34:29.848 18:11:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:29.848 18:11:34 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:29.848 18:11:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:29.848 18:11:34 -- 
common/autotest_common.sh@10 -- # set +x 00:34:29.848 18:11:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:29.848 18:11:34 -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:29.848 18:11:34 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:29.848 18:11:34 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:29.848 18:11:34 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.848 18:11:34 -- nvmf/common.sh@520 -- # config=() 00:34:29.848 18:11:34 -- nvmf/common.sh@520 -- # local subsystem config 00:34:29.848 18:11:34 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.848 18:11:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:29.848 18:11:34 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:29.848 18:11:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:29.848 { 00:34:29.848 "params": { 00:34:29.848 "name": "Nvme$subsystem", 00:34:29.848 "trtype": "$TEST_TRANSPORT", 00:34:29.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:29.848 "adrfam": "ipv4", 00:34:29.848 "trsvcid": "$NVMF_PORT", 00:34:29.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:29.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:29.848 "hdgst": ${hdgst:-false}, 00:34:29.848 "ddgst": ${ddgst:-false} 00:34:29.848 }, 00:34:29.848 "method": "bdev_nvme_attach_controller" 00:34:29.848 } 00:34:29.848 EOF 00:34:29.848 )") 00:34:29.848 18:11:34 -- target/dif.sh@82 -- # gen_fio_conf 00:34:29.848 18:11:34 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:29.848 18:11:34 -- target/dif.sh@54 -- # local file 00:34:29.848 18:11:34 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:29.849 18:11:34 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.849 18:11:34 -- target/dif.sh@56 -- # cat 00:34:29.849 18:11:34 -- common/autotest_common.sh@1320 -- # shift 00:34:29.849 18:11:34 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:29.849 18:11:34 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:29.849 18:11:34 -- nvmf/common.sh@542 -- # cat 00:34:29.849 18:11:34 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.849 18:11:34 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:29.849 18:11:34 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:29.849 18:11:34 -- target/dif.sh@72 -- # (( file <= files )) 00:34:29.849 18:11:34 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:29.849 18:11:34 -- target/dif.sh@73 -- # cat 00:34:29.849 18:11:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:29.849 18:11:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:29.849 { 00:34:29.849 "params": { 00:34:29.849 "name": "Nvme$subsystem", 00:34:29.849 "trtype": "$TEST_TRANSPORT", 00:34:29.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:29.849 "adrfam": "ipv4", 00:34:29.849 "trsvcid": "$NVMF_PORT", 00:34:29.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:29.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:29.849 "hdgst": ${hdgst:-false}, 00:34:29.849 "ddgst": ${ddgst:-false} 00:34:29.849 }, 00:34:29.849 "method": "bdev_nvme_attach_controller" 00:34:29.849 } 00:34:29.849 EOF 00:34:29.849 )") 00:34:29.849 18:11:34 -- 
target/dif.sh@72 -- # (( file++ )) 00:34:29.849 18:11:34 -- target/dif.sh@72 -- # (( file <= files )) 00:34:29.849 18:11:34 -- nvmf/common.sh@542 -- # cat 00:34:29.849 18:11:34 -- nvmf/common.sh@544 -- # jq . 00:34:29.849 18:11:34 -- nvmf/common.sh@545 -- # IFS=, 00:34:29.849 18:11:34 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:29.849 "params": { 00:34:29.849 "name": "Nvme0", 00:34:29.849 "trtype": "tcp", 00:34:29.849 "traddr": "10.0.0.2", 00:34:29.849 "adrfam": "ipv4", 00:34:29.849 "trsvcid": "4420", 00:34:29.849 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:29.849 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:29.849 "hdgst": false, 00:34:29.849 "ddgst": false 00:34:29.849 }, 00:34:29.849 "method": "bdev_nvme_attach_controller" 00:34:29.849 },{ 00:34:29.849 "params": { 00:34:29.849 "name": "Nvme1", 00:34:29.849 "trtype": "tcp", 00:34:29.849 "traddr": "10.0.0.2", 00:34:29.849 "adrfam": "ipv4", 00:34:29.849 "trsvcid": "4420", 00:34:29.849 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:29.849 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:29.849 "hdgst": false, 00:34:29.849 "ddgst": false 00:34:29.849 }, 00:34:29.849 "method": "bdev_nvme_attach_controller" 00:34:29.849 }' 00:34:29.849 18:11:34 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:29.849 18:11:34 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:29.849 18:11:34 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:30.210 18:11:34 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:30.210 18:11:34 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:30.210 18:11:34 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:30.210 18:11:34 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:30.210 18:11:34 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:30.210 18:11:34 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:30.210 18:11:34 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:30.490 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:30.490 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:30.490 fio-3.35 00:34:30.490 Starting 2 threads 00:34:30.490 EAL: No free 2048 kB hugepages reported on node 1 00:34:31.061 [2024-07-22 18:11:35.146575] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
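Both the single- and two-subsystem jobs drive I/O through the SPDK fio bdev plugin rather than the kernel initiator: the generated JSON above attaches one NVMe-oF controller per subsystem, and fio is pointed at the plugin via LD_PRELOAD with the two configs fed in over /dev/fd/62 and /dev/fd/61. A standalone sketch with the configs saved to files (bdev.json and dif.fio are placeholder names, not files the script actually writes):

# bdev.json: the bdev_nvme_attach_controller entries printed above
# dif.fio:   the generated fio job section (filename0/filename1, randread, bs=4k, iodepth=4)
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=bdev.json dif.fio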
00:34:31.061 [2024-07-22 18:11:35.146619] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:41.072 00:34:41.072 filename0: (groupid=0, jobs=1): err= 0: pid=1909967: Mon Jul 22 18:11:45 2024 00:34:41.072 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10008msec) 00:34:41.072 slat (nsec): min=7249, max=28220, avg=7535.88, stdev=727.19 00:34:41.072 clat (usec): min=40838, max=42073, avg=40991.82, stdev=120.58 00:34:41.072 lat (usec): min=40845, max=42102, avg=40999.35, stdev=120.85 00:34:41.072 clat percentiles (usec): 00:34:41.072 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:41.072 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:41.072 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:41.072 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:41.072 | 99.99th=[42206] 00:34:41.072 bw ( KiB/s): min= 384, max= 416, per=49.73%, avg=388.80, stdev=11.72, samples=20 00:34:41.072 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:34:41.072 lat (msec) : 50=100.00% 00:34:41.072 cpu : usr=97.06%, sys=2.71%, ctx=27, majf=0, minf=88 00:34:41.072 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:41.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.072 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.072 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.072 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:41.072 filename1: (groupid=0, jobs=1): err= 0: pid=1909968: Mon Jul 22 18:11:45 2024 00:34:41.072 read: IOPS=97, BW=390KiB/s (400kB/s)(3904KiB/10006msec) 00:34:41.072 slat (nsec): min=7257, max=25379, avg=7552.53, stdev=859.04 00:34:41.072 clat (usec): min=40797, max=42057, avg=40983.82, stdev=82.09 00:34:41.072 lat (usec): min=40804, max=42082, avg=40991.37, stdev=82.50 00:34:41.072 clat percentiles (usec): 00:34:41.072 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:41.072 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:41.072 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:41.072 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:34:41.072 | 99.99th=[42206] 00:34:41.072 bw ( KiB/s): min= 384, max= 416, per=49.73%, avg=388.80, stdev=11.72, samples=20 00:34:41.072 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:34:41.072 lat (msec) : 50=100.00% 00:34:41.072 cpu : usr=97.14%, sys=2.64%, ctx=9, majf=0, minf=186 00:34:41.072 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:41.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.072 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.072 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.072 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:41.072 00:34:41.072 Run status group 0 (all jobs): 00:34:41.072 READ: bw=780KiB/s (799kB/s), 390KiB/s-390KiB/s (399kB/s-400kB/s), io=7808KiB (7995kB), run=10006-10008msec 00:34:41.334 18:11:45 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:41.334 18:11:45 -- target/dif.sh@43 -- # local sub 00:34:41.334 18:11:45 -- target/dif.sh@45 -- # for sub in "$@" 00:34:41.334 18:11:45 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:41.334 18:11:45 -- target/dif.sh@36 -- # local sub_id=0 
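The teardown traced next mirrors the setup: for each subsystem the script first removes the NVMe-oF subsystem and then deletes the null bdev behind it. Spelled out with the same rpc.py assumption as above:

./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py bdev_null_delete bdev_null0
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py bdev_null_delete bdev_null1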
00:34:41.334 18:11:45 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:41.334 18:11:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:41.334 18:11:45 -- common/autotest_common.sh@10 -- # set +x 00:34:41.334 18:11:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:41.334 18:11:45 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:41.334 18:11:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:41.334 18:11:45 -- common/autotest_common.sh@10 -- # set +x 00:34:41.334 18:11:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:41.334 18:11:45 -- target/dif.sh@45 -- # for sub in "$@" 00:34:41.334 18:11:45 -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:41.334 18:11:45 -- target/dif.sh@36 -- # local sub_id=1 00:34:41.334 18:11:45 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:41.334 18:11:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:41.334 18:11:45 -- common/autotest_common.sh@10 -- # set +x 00:34:41.334 18:11:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:41.334 18:11:45 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:41.334 18:11:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:41.334 18:11:45 -- common/autotest_common.sh@10 -- # set +x 00:34:41.335 18:11:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:41.335 00:34:41.335 real 0m11.469s 00:34:41.335 user 0m28.379s 00:34:41.335 sys 0m0.834s 00:34:41.335 18:11:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:41.335 18:11:45 -- common/autotest_common.sh@10 -- # set +x 00:34:41.335 ************************************ 00:34:41.335 END TEST fio_dif_1_multi_subsystems 00:34:41.335 ************************************ 00:34:41.335 18:11:45 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:41.335 18:11:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:34:41.335 18:11:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:34:41.335 18:11:45 -- common/autotest_common.sh@10 -- # set +x 00:34:41.335 ************************************ 00:34:41.335 START TEST fio_dif_rand_params 00:34:41.335 ************************************ 00:34:41.335 18:11:45 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:34:41.335 18:11:45 -- target/dif.sh@100 -- # local NULL_DIF 00:34:41.335 18:11:45 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:41.335 18:11:45 -- target/dif.sh@103 -- # NULL_DIF=3 00:34:41.335 18:11:45 -- target/dif.sh@103 -- # bs=128k 00:34:41.335 18:11:45 -- target/dif.sh@103 -- # numjobs=3 00:34:41.335 18:11:45 -- target/dif.sh@103 -- # iodepth=3 00:34:41.335 18:11:45 -- target/dif.sh@103 -- # runtime=5 00:34:41.335 18:11:45 -- target/dif.sh@105 -- # create_subsystems 0 00:34:41.335 18:11:45 -- target/dif.sh@28 -- # local sub 00:34:41.335 18:11:45 -- target/dif.sh@30 -- # for sub in "$@" 00:34:41.335 18:11:45 -- target/dif.sh@31 -- # create_subsystem 0 00:34:41.335 18:11:45 -- target/dif.sh@18 -- # local sub_id=0 00:34:41.335 18:11:45 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:41.335 18:11:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:41.335 18:11:45 -- common/autotest_common.sh@10 -- # set +x 00:34:41.335 bdev_null0 00:34:41.335 18:11:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:41.335 18:11:45 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:41.335 18:11:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:41.335 18:11:45 -- common/autotest_common.sh@10 -- # set +x 00:34:41.335 18:11:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:41.335 18:11:45 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:41.335 18:11:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:41.335 18:11:45 -- common/autotest_common.sh@10 -- # set +x 00:34:41.335 18:11:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:41.335 18:11:45 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:41.335 18:11:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:41.335 18:11:45 -- common/autotest_common.sh@10 -- # set +x 00:34:41.335 [2024-07-22 18:11:45.541883] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:41.335 18:11:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:41.335 18:11:45 -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:41.335 18:11:45 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:41.335 18:11:45 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:41.335 18:11:45 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:41.335 18:11:45 -- nvmf/common.sh@520 -- # config=() 00:34:41.335 18:11:45 -- nvmf/common.sh@520 -- # local subsystem config 00:34:41.335 18:11:45 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:41.335 18:11:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:41.335 18:11:45 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:41.335 18:11:45 -- target/dif.sh@82 -- # gen_fio_conf 00:34:41.335 18:11:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:41.335 { 00:34:41.335 "params": { 00:34:41.335 "name": "Nvme$subsystem", 00:34:41.335 "trtype": "$TEST_TRANSPORT", 00:34:41.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:41.335 "adrfam": "ipv4", 00:34:41.335 "trsvcid": "$NVMF_PORT", 00:34:41.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:41.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:41.335 "hdgst": ${hdgst:-false}, 00:34:41.335 "ddgst": ${ddgst:-false} 00:34:41.335 }, 00:34:41.335 "method": "bdev_nvme_attach_controller" 00:34:41.335 } 00:34:41.335 EOF 00:34:41.335 )") 00:34:41.335 18:11:45 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:41.335 18:11:45 -- target/dif.sh@54 -- # local file 00:34:41.335 18:11:45 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:41.335 18:11:45 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:41.335 18:11:45 -- target/dif.sh@56 -- # cat 00:34:41.335 18:11:45 -- common/autotest_common.sh@1320 -- # shift 00:34:41.335 18:11:45 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:41.335 18:11:45 -- nvmf/common.sh@542 -- # cat 00:34:41.335 18:11:45 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:41.335 18:11:45 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:41.335 18:11:45 -- common/autotest_common.sh@1324 -- # grep libasan 
00:34:41.335 18:11:45 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:41.335 18:11:45 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:41.335 18:11:45 -- target/dif.sh@72 -- # (( file <= files )) 00:34:41.335 18:11:45 -- nvmf/common.sh@544 -- # jq . 00:34:41.335 18:11:45 -- nvmf/common.sh@545 -- # IFS=, 00:34:41.335 18:11:45 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:41.335 "params": { 00:34:41.335 "name": "Nvme0", 00:34:41.335 "trtype": "tcp", 00:34:41.335 "traddr": "10.0.0.2", 00:34:41.335 "adrfam": "ipv4", 00:34:41.335 "trsvcid": "4420", 00:34:41.335 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:41.335 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:41.335 "hdgst": false, 00:34:41.335 "ddgst": false 00:34:41.335 }, 00:34:41.335 "method": "bdev_nvme_attach_controller" 00:34:41.335 }' 00:34:41.335 18:11:45 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:41.335 18:11:45 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:41.335 18:11:45 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:41.335 18:11:45 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:41.335 18:11:45 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:41.335 18:11:45 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:41.600 18:11:45 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:41.600 18:11:45 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:41.600 18:11:45 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:41.600 18:11:45 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:41.861 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:41.861 ... 00:34:41.861 fio-3.35 00:34:41.861 Starting 3 threads 00:34:41.861 EAL: No free 2048 kB hugepages reported on node 1 00:34:42.428 [2024-07-22 18:11:46.404090] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
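Each "params"/"method" block printed above is the JSON form of a bdev_nvme_attach_controller call that the fio plugin consumes at startup. Issued against a live SPDK application over its RPC socket, the equivalent attach looks roughly like this (the flag spelling is an assumption; the values mirror the config shown above):

./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0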
00:34:42.428 [2024-07-22 18:11:46.404130] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:34:47.699 00:34:47.699 filename0: (groupid=0, jobs=1): err= 0: pid=1911981: Mon Jul 22 18:11:51 2024 00:34:47.699 read: IOPS=232, BW=29.1MiB/s (30.5MB/s)(147MiB/5048msec) 00:34:47.699 slat (nsec): min=7220, max=31199, avg=8062.62, stdev=1505.56 00:34:47.699 clat (usec): min=5786, max=90045, avg=12847.55, stdev=11104.25 00:34:47.699 lat (usec): min=5794, max=90053, avg=12855.61, stdev=11104.43 00:34:47.699 clat percentiles (usec): 00:34:47.699 | 1.00th=[ 6063], 5.00th=[ 7046], 10.00th=[ 7504], 20.00th=[ 8291], 00:34:47.699 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10290], 00:34:47.699 | 70.00th=[10945], 80.00th=[11863], 90.00th=[13566], 95.00th=[47973], 00:34:47.699 | 99.00th=[51119], 99.50th=[52691], 99.90th=[87557], 99.95th=[89654], 00:34:47.699 | 99.99th=[89654] 00:34:47.699 bw ( KiB/s): min=21504, max=35840, per=31.20%, avg=30003.20, stdev=4973.99, samples=10 00:34:47.699 iops : min= 168, max= 280, avg=234.40, stdev=38.86, samples=10 00:34:47.699 lat (msec) : 10=56.98%, 20=35.09%, 50=5.71%, 100=2.21% 00:34:47.699 cpu : usr=95.80%, sys=3.94%, ctx=12, majf=0, minf=60 00:34:47.699 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:47.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.699 issued rwts: total=1174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:47.699 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:47.699 filename0: (groupid=0, jobs=1): err= 0: pid=1911982: Mon Jul 22 18:11:51 2024 00:34:47.699 read: IOPS=291, BW=36.4MiB/s (38.2MB/s)(184MiB/5044msec) 00:34:47.699 slat (nsec): min=7227, max=30037, avg=8585.41, stdev=1226.36 00:34:47.699 clat (usec): min=4362, max=88049, avg=10260.86, stdev=7482.22 00:34:47.699 lat (usec): min=4370, max=88057, avg=10269.44, stdev=7482.21 00:34:47.699 clat percentiles (usec): 00:34:47.699 | 1.00th=[ 4817], 5.00th=[ 5473], 10.00th=[ 6063], 20.00th=[ 7177], 00:34:47.699 | 30.00th=[ 7832], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[ 9765], 00:34:47.699 | 70.00th=[10421], 80.00th=[10945], 90.00th=[11863], 95.00th=[12649], 00:34:47.699 | 99.00th=[49021], 99.50th=[49546], 99.90th=[50594], 99.95th=[87557], 00:34:47.699 | 99.99th=[87557] 00:34:47.699 bw ( KiB/s): min=25344, max=46848, per=39.06%, avg=37555.20, stdev=6329.70, samples=10 00:34:47.699 iops : min= 198, max= 366, avg=293.40, stdev=49.45, samples=10 00:34:47.699 lat (msec) : 10=64.53%, 20=32.13%, 50=2.86%, 100=0.48% 00:34:47.699 cpu : usr=95.48%, sys=4.24%, ctx=9, majf=0, minf=141 00:34:47.699 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:47.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.699 issued rwts: total=1469,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:47.699 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:47.699 filename0: (groupid=0, jobs=1): err= 0: pid=1911983: Mon Jul 22 18:11:51 2024 00:34:47.699 read: IOPS=229, BW=28.7MiB/s (30.1MB/s)(144MiB/5007msec) 00:34:47.699 slat (nsec): min=7211, max=32502, avg=8156.51, stdev=1371.45 00:34:47.699 clat (usec): min=4982, max=92075, avg=13061.33, stdev=10440.79 00:34:47.699 lat (usec): min=4990, max=92084, avg=13069.49, stdev=10440.76 00:34:47.699 clat percentiles 
(usec): 00:34:47.699 | 1.00th=[ 5342], 5.00th=[ 6063], 10.00th=[ 6915], 20.00th=[ 8094], 00:34:47.699 | 30.00th=[ 9241], 40.00th=[10159], 50.00th=[10683], 60.00th=[11469], 00:34:47.699 | 70.00th=[12518], 80.00th=[13435], 90.00th=[15008], 95.00th=[47973], 00:34:47.699 | 99.00th=[52167], 99.50th=[54264], 99.90th=[89654], 99.95th=[91751], 00:34:47.699 | 99.99th=[91751] 00:34:47.699 bw ( KiB/s): min=18944, max=38144, per=30.51%, avg=29337.60, stdev=6888.47, samples=10 00:34:47.699 iops : min= 148, max= 298, avg=229.20, stdev=53.82, samples=10 00:34:47.699 lat (msec) : 10=38.38%, 20=55.35%, 50=3.83%, 100=2.44% 00:34:47.699 cpu : usr=94.09%, sys=4.85%, ctx=368, majf=0, minf=103 00:34:47.699 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:47.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.699 issued rwts: total=1149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:47.699 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:47.699 00:34:47.699 Run status group 0 (all jobs): 00:34:47.699 READ: bw=93.9MiB/s (98.5MB/s), 28.7MiB/s-36.4MiB/s (30.1MB/s-38.2MB/s), io=474MiB (497MB), run=5007-5048msec 00:34:47.699 18:11:51 -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:47.699 18:11:51 -- target/dif.sh@43 -- # local sub 00:34:47.699 18:11:51 -- target/dif.sh@45 -- # for sub in "$@" 00:34:47.699 18:11:51 -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:47.699 18:11:51 -- target/dif.sh@36 -- # local sub_id=0 00:34:47.699 18:11:51 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:47.699 18:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:47.699 18:11:51 -- common/autotest_common.sh@10 -- # set +x 00:34:47.699 18:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:47.699 18:11:51 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:47.699 18:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:47.699 18:11:51 -- common/autotest_common.sh@10 -- # set +x 00:34:47.699 18:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:47.699 18:11:51 -- target/dif.sh@109 -- # NULL_DIF=2 00:34:47.699 18:11:51 -- target/dif.sh@109 -- # bs=4k 00:34:47.699 18:11:51 -- target/dif.sh@109 -- # numjobs=8 00:34:47.699 18:11:51 -- target/dif.sh@109 -- # iodepth=16 00:34:47.699 18:11:51 -- target/dif.sh@109 -- # runtime= 00:34:47.699 18:11:51 -- target/dif.sh@109 -- # files=2 00:34:47.699 18:11:51 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:47.699 18:11:51 -- target/dif.sh@28 -- # local sub 00:34:47.699 18:11:51 -- target/dif.sh@30 -- # for sub in "$@" 00:34:47.699 18:11:51 -- target/dif.sh@31 -- # create_subsystem 0 00:34:47.699 18:11:51 -- target/dif.sh@18 -- # local sub_id=0 00:34:47.699 18:11:51 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:47.699 18:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:47.699 18:11:51 -- common/autotest_common.sh@10 -- # set +x 00:34:47.699 bdev_null0 00:34:47.699 18:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:47.699 18:11:51 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:47.699 18:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:47.699 18:11:51 -- common/autotest_common.sh@10 -- # set +x 00:34:47.699 18:11:51 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:47.699 18:11:51 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:47.699 18:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:47.699 18:11:51 -- common/autotest_common.sh@10 -- # set +x 00:34:47.699 18:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:47.699 18:11:51 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:47.699 18:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:47.699 18:11:51 -- common/autotest_common.sh@10 -- # set +x 00:34:47.699 [2024-07-22 18:11:51.765490] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:47.699 18:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:47.699 18:11:51 -- target/dif.sh@30 -- # for sub in "$@" 00:34:47.699 18:11:51 -- target/dif.sh@31 -- # create_subsystem 1 00:34:47.699 18:11:51 -- target/dif.sh@18 -- # local sub_id=1 00:34:47.699 18:11:51 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:47.699 18:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:47.699 18:11:51 -- common/autotest_common.sh@10 -- # set +x 00:34:47.699 bdev_null1 00:34:47.699 18:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:47.699 18:11:51 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:47.699 18:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:47.699 18:11:51 -- common/autotest_common.sh@10 -- # set +x 00:34:47.699 18:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:47.699 18:11:51 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:47.699 18:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:47.700 18:11:51 -- common/autotest_common.sh@10 -- # set +x 00:34:47.700 18:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:47.700 18:11:51 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:47.700 18:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:47.700 18:11:51 -- common/autotest_common.sh@10 -- # set +x 00:34:47.700 18:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:47.700 18:11:51 -- target/dif.sh@30 -- # for sub in "$@" 00:34:47.700 18:11:51 -- target/dif.sh@31 -- # create_subsystem 2 00:34:47.700 18:11:51 -- target/dif.sh@18 -- # local sub_id=2 00:34:47.700 18:11:51 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:47.700 18:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:47.700 18:11:51 -- common/autotest_common.sh@10 -- # set +x 00:34:47.700 bdev_null2 00:34:47.700 18:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:47.700 18:11:51 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:47.700 18:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:47.700 18:11:51 -- common/autotest_common.sh@10 -- # set +x 00:34:47.700 18:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:47.700 18:11:51 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:47.700 18:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:34:47.700 18:11:51 -- common/autotest_common.sh@10 -- # set +x 00:34:47.700 18:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:47.700 18:11:51 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:47.700 18:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:34:47.700 18:11:51 -- common/autotest_common.sh@10 -- # set +x 00:34:47.700 18:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:34:47.700 18:11:51 -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:47.700 18:11:51 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:47.700 18:11:51 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:47.700 18:11:51 -- target/dif.sh@82 -- # gen_fio_conf 00:34:47.700 18:11:51 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:47.700 18:11:51 -- target/dif.sh@54 -- # local file 00:34:47.700 18:11:51 -- target/dif.sh@56 -- # cat 00:34:47.700 18:11:51 -- nvmf/common.sh@520 -- # config=() 00:34:47.700 18:11:51 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:47.700 18:11:51 -- nvmf/common.sh@520 -- # local subsystem config 00:34:47.700 18:11:51 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:34:47.700 18:11:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:47.700 18:11:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:47.700 { 00:34:47.700 "params": { 00:34:47.700 "name": "Nvme$subsystem", 00:34:47.700 "trtype": "$TEST_TRANSPORT", 00:34:47.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:47.700 "adrfam": "ipv4", 00:34:47.700 "trsvcid": "$NVMF_PORT", 00:34:47.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:47.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:47.700 "hdgst": ${hdgst:-false}, 00:34:47.700 "ddgst": ${ddgst:-false} 00:34:47.700 }, 00:34:47.700 "method": "bdev_nvme_attach_controller" 00:34:47.700 } 00:34:47.700 EOF 00:34:47.700 )") 00:34:47.700 18:11:51 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:47.700 18:11:51 -- common/autotest_common.sh@1318 -- # local sanitizers 00:34:47.700 18:11:51 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:47.700 18:11:51 -- common/autotest_common.sh@1320 -- # shift 00:34:47.700 18:11:51 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:34:47.700 18:11:51 -- nvmf/common.sh@542 -- # cat 00:34:47.700 18:11:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:47.700 18:11:51 -- target/dif.sh@72 -- # (( file = 1 )) 00:34:47.700 18:11:51 -- target/dif.sh@72 -- # (( file <= files )) 00:34:47.700 18:11:51 -- target/dif.sh@73 -- # cat 00:34:47.700 18:11:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:47.700 18:11:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:47.700 { 00:34:47.700 "params": { 00:34:47.700 "name": "Nvme$subsystem", 00:34:47.700 "trtype": "$TEST_TRANSPORT", 00:34:47.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:47.700 "adrfam": "ipv4", 00:34:47.700 "trsvcid": "$NVMF_PORT", 00:34:47.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:47.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:47.700 "hdgst": ${hdgst:-false}, 00:34:47.700 "ddgst": ${ddgst:-false} 00:34:47.700 }, 00:34:47.700 "method": 
"bdev_nvme_attach_controller" 00:34:47.700 } 00:34:47.700 EOF 00:34:47.700 )") 00:34:47.700 18:11:51 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:47.700 18:11:51 -- common/autotest_common.sh@1324 -- # grep libasan 00:34:47.700 18:11:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:47.700 18:11:51 -- nvmf/common.sh@542 -- # cat 00:34:47.700 18:11:51 -- target/dif.sh@72 -- # (( file++ )) 00:34:47.700 18:11:51 -- target/dif.sh@72 -- # (( file <= files )) 00:34:47.700 18:11:51 -- target/dif.sh@73 -- # cat 00:34:47.700 18:11:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:34:47.700 18:11:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:34:47.700 { 00:34:47.700 "params": { 00:34:47.700 "name": "Nvme$subsystem", 00:34:47.700 "trtype": "$TEST_TRANSPORT", 00:34:47.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:47.700 "adrfam": "ipv4", 00:34:47.700 "trsvcid": "$NVMF_PORT", 00:34:47.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:47.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:47.700 "hdgst": ${hdgst:-false}, 00:34:47.700 "ddgst": ${ddgst:-false} 00:34:47.700 }, 00:34:47.700 "method": "bdev_nvme_attach_controller" 00:34:47.700 } 00:34:47.700 EOF 00:34:47.700 )") 00:34:47.700 18:11:51 -- nvmf/common.sh@542 -- # cat 00:34:47.700 18:11:51 -- target/dif.sh@72 -- # (( file++ )) 00:34:47.700 18:11:51 -- target/dif.sh@72 -- # (( file <= files )) 00:34:47.700 18:11:51 -- nvmf/common.sh@544 -- # jq . 00:34:47.700 18:11:51 -- nvmf/common.sh@545 -- # IFS=, 00:34:47.700 18:11:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:34:47.700 "params": { 00:34:47.700 "name": "Nvme0", 00:34:47.700 "trtype": "tcp", 00:34:47.700 "traddr": "10.0.0.2", 00:34:47.700 "adrfam": "ipv4", 00:34:47.700 "trsvcid": "4420", 00:34:47.700 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:47.700 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:47.700 "hdgst": false, 00:34:47.700 "ddgst": false 00:34:47.700 }, 00:34:47.700 "method": "bdev_nvme_attach_controller" 00:34:47.700 },{ 00:34:47.700 "params": { 00:34:47.700 "name": "Nvme1", 00:34:47.700 "trtype": "tcp", 00:34:47.700 "traddr": "10.0.0.2", 00:34:47.700 "adrfam": "ipv4", 00:34:47.700 "trsvcid": "4420", 00:34:47.700 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:47.700 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:47.700 "hdgst": false, 00:34:47.700 "ddgst": false 00:34:47.700 }, 00:34:47.700 "method": "bdev_nvme_attach_controller" 00:34:47.700 },{ 00:34:47.700 "params": { 00:34:47.700 "name": "Nvme2", 00:34:47.700 "trtype": "tcp", 00:34:47.700 "traddr": "10.0.0.2", 00:34:47.700 "adrfam": "ipv4", 00:34:47.700 "trsvcid": "4420", 00:34:47.700 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:47.700 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:47.700 "hdgst": false, 00:34:47.700 "ddgst": false 00:34:47.700 }, 00:34:47.700 "method": "bdev_nvme_attach_controller" 00:34:47.700 }' 00:34:47.700 18:11:51 -- common/autotest_common.sh@1324 -- # asan_lib= 00:34:47.700 18:11:51 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:47.700 18:11:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:34:47.700 18:11:51 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:47.700 18:11:51 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:34:47.700 18:11:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:34:47.700 18:11:51 -- common/autotest_common.sh@1324 -- # 
asan_lib= 00:34:47.700 18:11:51 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:34:47.700 18:11:51 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:47.700 18:11:51 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:48.269 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:48.269 ... 00:34:48.269 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:48.269 ... 00:34:48.269 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:48.269 ... 00:34:48.269 fio-3.35 00:34:48.269 Starting 24 threads 00:34:48.269 EAL: No free 2048 kB hugepages reported on node 1 00:34:48.848 [2024-07-22 18:11:53.006256] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:34:48.848 [2024-07-22 18:11:53.006295] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:35:01.084 00:35:01.084 filename0: (groupid=0, jobs=1): err= 0: pid=1913328: Mon Jul 22 18:12:03 2024 00:35:01.084 read: IOPS=575, BW=2300KiB/s (2356kB/s)(22.5MiB/10016msec) 00:35:01.084 slat (nsec): min=7233, max=74390, avg=10633.53, stdev=5858.54 00:35:01.084 clat (usec): min=1556, max=34517, avg=27725.94, stdev=3579.62 00:35:01.084 lat (usec): min=1574, max=34526, avg=27736.58, stdev=3578.56 00:35:01.084 clat percentiles (usec): 00:35:01.084 | 1.00th=[ 3425], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:35:01.084 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:35:01.084 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:35:01.084 | 99.00th=[32900], 99.50th=[32900], 99.90th=[34341], 99.95th=[34341], 00:35:01.084 | 99.99th=[34341] 00:35:01.084 bw ( KiB/s): min= 2171, max= 3072, per=4.24%, avg=2303.74, stdev=195.71, samples=19 00:35:01.084 iops : min= 542, max= 768, avg=575.89, stdev=48.96, samples=19 00:35:01.084 lat (msec) : 2=0.31%, 4=0.97%, 10=0.66%, 20=0.28%, 50=97.78% 00:35:01.084 cpu : usr=99.12%, sys=0.55%, ctx=38, majf=0, minf=68 00:35:01.084 IO depths : 1=3.9%, 2=10.0%, 4=24.8%, 8=52.7%, 16=8.6%, 32=0.0%, >=64=0.0% 00:35:01.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.084 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.084 issued rwts: total=5760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.084 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:01.084 filename0: (groupid=0, jobs=1): err= 0: pid=1913329: Mon Jul 22 18:12:03 2024 00:35:01.084 read: IOPS=565, BW=2263KiB/s (2318kB/s)(22.1MiB/10010msec) 00:35:01.084 slat (usec): min=7, max=104, avg=15.89, stdev= 9.19 00:35:01.084 clat (usec): min=8019, max=51264, avg=28145.42, stdev=2954.98 00:35:01.084 lat (usec): min=8033, max=51279, avg=28161.32, stdev=2954.63 00:35:01.084 clat percentiles (usec): 00:35:01.084 | 1.00th=[10421], 5.00th=[27919], 10.00th=[27919], 20.00th=[27919], 00:35:01.084 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:35:01.084 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:35:01.084 | 99.00th=[34341], 99.50th=[47973], 99.90th=[51119], 99.95th=[51119], 00:35:01.084 | 99.99th=[51119] 00:35:01.084 bw ( KiB/s): min= 2176, max= 2304, 
per=4.15%, avg=2256.32, stdev=60.32, samples=19 00:35:01.084 iops : min= 544, max= 576, avg=564.00, stdev=15.10, samples=19 00:35:01.084 lat (msec) : 10=0.95%, 20=0.28%, 50=98.62%, 100=0.14% 00:35:01.084 cpu : usr=99.23%, sys=0.48%, ctx=14, majf=0, minf=47 00:35:01.084 IO depths : 1=2.6%, 2=8.8%, 4=24.8%, 8=53.9%, 16=9.9%, 32=0.0%, >=64=0.0% 00:35:01.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.084 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.084 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.084 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:01.084 filename0: (groupid=0, jobs=1): err= 0: pid=1913330: Mon Jul 22 18:12:03 2024 00:35:01.084 read: IOPS=564, BW=2256KiB/s (2310kB/s)(22.1MiB/10013msec) 00:35:01.084 slat (usec): min=5, max=120, avg=45.72, stdev=18.13 00:35:01.084 clat (usec): min=24700, max=42652, avg=27961.69, stdev=860.59 00:35:01.084 lat (usec): min=24709, max=42667, avg=28007.41, stdev=859.11 00:35:01.084 clat percentiles (usec): 00:35:01.084 | 1.00th=[27395], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:35:01.084 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:35:01.084 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:35:01.084 | 99.00th=[28967], 99.50th=[28967], 99.90th=[42730], 99.95th=[42730], 00:35:01.084 | 99.99th=[42730] 00:35:01.084 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2256.79, stdev=62.96, samples=19 00:35:01.084 iops : min= 544, max= 576, avg=564.16, stdev=15.71, samples=19 00:35:01.084 lat (msec) : 50=100.00% 00:35:01.084 cpu : usr=99.18%, sys=0.49%, ctx=85, majf=0, minf=44 00:35:01.084 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:01.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.084 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.084 issued rwts: total=5648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.084 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:01.085 filename0: (groupid=0, jobs=1): err= 0: pid=1913331: Mon Jul 22 18:12:03 2024 00:35:01.085 read: IOPS=565, BW=2261KiB/s (2316kB/s)(22.1MiB/10019msec) 00:35:01.085 slat (usec): min=7, max=133, avg=16.06, stdev=18.89 00:35:01.085 clat (usec): min=20529, max=30653, avg=28181.98, stdev=481.14 00:35:01.085 lat (usec): min=20569, max=30661, avg=28198.04, stdev=476.24 00:35:01.085 clat percentiles (usec): 00:35:01.085 | 1.00th=[27395], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:35:01.085 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:35:01.085 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:35:01.085 | 99.00th=[28967], 99.50th=[29230], 99.90th=[29230], 99.95th=[29230], 00:35:01.085 | 99.99th=[30540] 00:35:01.085 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2261.75, stdev=59.74, samples=20 00:35:01.085 iops : min= 544, max= 576, avg=565.40, stdev=14.91, samples=20 00:35:01.085 lat (msec) : 50=100.00% 00:35:01.085 cpu : usr=99.09%, sys=0.60%, ctx=36, majf=0, minf=57 00:35:01.085 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:01.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.085 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.085 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.085 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:35:01.085 filename0: (groupid=0, jobs=1): err= 0: pid=1913333: Mon Jul 22 18:12:03 2024 00:35:01.085 read: IOPS=564, BW=2257KiB/s (2311kB/s)(22.1MiB/10009msec) 00:35:01.085 slat (usec): min=7, max=114, avg=45.41, stdev=18.35 00:35:01.085 clat (usec): min=24715, max=38765, avg=27934.09, stdev=668.23 00:35:01.085 lat (usec): min=24729, max=38788, avg=27979.50, stdev=668.50 00:35:01.085 clat percentiles (usec): 00:35:01.085 | 1.00th=[27395], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:35:01.085 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:35:01.085 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:35:01.085 | 99.00th=[28967], 99.50th=[28967], 99.90th=[38536], 99.95th=[38536], 00:35:01.085 | 99.99th=[38536] 00:35:01.085 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2256.58, stdev=63.24, samples=19 00:35:01.085 iops : min= 544, max= 576, avg=564.11, stdev=15.78, samples=19 00:35:01.085 lat (msec) : 50=100.00% 00:35:01.085 cpu : usr=98.64%, sys=0.77%, ctx=117, majf=0, minf=49 00:35:01.085 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:01.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.085 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.085 issued rwts: total=5648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.085 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:01.085 filename0: (groupid=0, jobs=1): err= 0: pid=1913334: Mon Jul 22 18:12:03 2024 00:35:01.085 read: IOPS=565, BW=2262KiB/s (2316kB/s)(22.1MiB/10018msec) 00:35:01.085 slat (usec): min=5, max=129, avg=44.86, stdev=18.97 00:35:01.085 clat (usec): min=17244, max=33192, avg=27903.84, stdev=725.89 00:35:01.085 lat (usec): min=17251, max=33200, avg=27948.70, stdev=726.57 00:35:01.085 clat percentiles (usec): 00:35:01.085 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:35:01.085 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:35:01.085 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:35:01.085 | 99.00th=[28967], 99.50th=[28967], 99.90th=[33162], 99.95th=[33162], 00:35:01.085 | 99.99th=[33162] 00:35:01.085 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2263.05, stdev=60.78, samples=19 00:35:01.085 iops : min= 544, max= 576, avg=565.68, stdev=15.15, samples=19 00:35:01.085 lat (msec) : 20=0.28%, 50=99.72% 00:35:01.085 cpu : usr=99.13%, sys=0.50%, ctx=45, majf=0, minf=57 00:35:01.085 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:01.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.085 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.085 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.085 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:01.085 filename0: (groupid=0, jobs=1): err= 0: pid=1913335: Mon Jul 22 18:12:03 2024 00:35:01.085 read: IOPS=566, BW=2265KiB/s (2319kB/s)(22.1MiB/10003msec) 00:35:01.085 slat (usec): min=6, max=158, avg=33.20, stdev=18.51 00:35:01.085 clat (usec): min=5849, max=43722, avg=27999.71, stdev=1841.16 00:35:01.085 lat (usec): min=5858, max=43739, avg=28032.90, stdev=1841.36 00:35:01.085 clat percentiles (usec): 00:35:01.085 | 1.00th=[25560], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:35:01.085 | 30.00th=[27919], 40.00th=[27919], 50.00th=[28181], 60.00th=[28181], 00:35:01.085 | 
70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:35:01.085 | 99.00th=[30802], 99.50th=[31327], 99.90th=[43779], 99.95th=[43779], 00:35:01.085 | 99.99th=[43779] 00:35:01.085 bw ( KiB/s): min= 2171, max= 2304, per=4.15%, avg=2256.53, stdev=60.18, samples=19 00:35:01.085 iops : min= 542, max= 576, avg=564.05, stdev=15.09, samples=19 00:35:01.085 lat (msec) : 10=0.41%, 20=0.26%, 50=99.33% 00:35:01.085 cpu : usr=99.09%, sys=0.62%, ctx=14, majf=0, minf=36 00:35:01.085 IO depths : 1=2.2%, 2=8.3%, 4=24.3%, 8=54.9%, 16=10.3%, 32=0.0%, >=64=0.0% 00:35:01.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.085 complete : 0=0.0%, 4=94.2%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.085 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.085 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:01.085 filename0: (groupid=0, jobs=1): err= 0: pid=1913336: Mon Jul 22 18:12:03 2024 00:35:01.085 read: IOPS=580, BW=2321KiB/s (2377kB/s)(22.7MiB/10003msec) 00:35:01.085 slat (usec): min=6, max=123, avg=42.94, stdev=25.06 00:35:01.085 clat (usec): min=6136, max=43688, avg=27169.86, stdev=3141.11 00:35:01.085 lat (usec): min=6148, max=43703, avg=27212.80, stdev=3148.81 00:35:01.085 clat percentiles (usec): 00:35:01.085 | 1.00th=[15139], 5.00th=[18482], 10.00th=[27395], 20.00th=[27657], 00:35:01.085 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:35:01.085 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:35:01.085 | 99.00th=[30802], 99.50th=[39060], 99.90th=[43779], 99.95th=[43779], 00:35:01.085 | 99.99th=[43779] 00:35:01.085 bw ( KiB/s): min= 2171, max= 3008, per=4.23%, avg=2299.47, stdev=181.68, samples=19 00:35:01.085 iops : min= 542, max= 752, avg=574.79, stdev=45.45, samples=19 00:35:01.085 lat (msec) : 10=0.41%, 20=5.03%, 50=94.56% 00:35:01.085 cpu : usr=97.52%, sys=1.37%, ctx=259, majf=0, minf=48 00:35:01.085 IO depths : 1=4.2%, 2=9.8%, 4=22.7%, 8=54.8%, 16=8.5%, 32=0.0%, >=64=0.0% 00:35:01.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.085 complete : 0=0.0%, 4=93.6%, 8=0.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.085 issued rwts: total=5804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.085 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:01.085 filename1: (groupid=0, jobs=1): err= 0: pid=1913337: Mon Jul 22 18:12:03 2024 00:35:01.085 read: IOPS=565, BW=2261KiB/s (2315kB/s)(22.1MiB/10020msec) 00:35:01.085 slat (usec): min=6, max=138, avg=45.30, stdev=24.72 00:35:01.085 clat (usec): min=20616, max=31129, avg=27949.82, stdev=530.59 00:35:01.085 lat (usec): min=20625, max=31177, avg=27995.12, stdev=525.95 00:35:01.085 clat percentiles (usec): 00:35:01.085 | 1.00th=[27395], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:35:01.085 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:35:01.085 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:35:01.085 | 99.00th=[28967], 99.50th=[28967], 99.90th=[29230], 99.95th=[29230], 00:35:01.085 | 99.99th=[31065] 00:35:01.085 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2258.95, stdev=62.46, samples=20 00:35:01.085 iops : min= 544, max= 576, avg=564.70, stdev=15.59, samples=20 00:35:01.085 lat (msec) : 50=100.00% 00:35:01.085 cpu : usr=98.36%, sys=0.90%, ctx=373, majf=0, minf=57 00:35:01.085 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:01.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.085 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.085 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.085 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:01.085 filename1: (groupid=0, jobs=1): err= 0: pid=1913338: Mon Jul 22 18:12:03 2024 00:35:01.085 read: IOPS=565, BW=2261KiB/s (2316kB/s)(22.1MiB/10019msec) 00:35:01.085 slat (nsec): min=5609, max=77016, avg=14314.29, stdev=10363.78 00:35:01.085 clat (usec): min=17351, max=32956, avg=28189.74, stdev=748.73 00:35:01.085 lat (usec): min=17359, max=32970, avg=28204.06, stdev=747.82 00:35:01.085 clat percentiles (usec): 00:35:01.085 | 1.00th=[25560], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:35:01.085 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:35:01.085 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:35:01.085 | 99.00th=[29230], 99.50th=[31327], 99.90th=[32900], 99.95th=[32900], 00:35:01.085 | 99.99th=[32900] 00:35:01.085 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2262.95, stdev=59.35, samples=20 00:35:01.085 iops : min= 544, max= 576, avg=565.70, stdev=14.81, samples=20 00:35:01.085 lat (msec) : 20=0.25%, 50=99.75% 00:35:01.085 cpu : usr=98.73%, sys=0.95%, ctx=21, majf=0, minf=52 00:35:01.085 IO depths : 1=5.8%, 2=12.0%, 4=24.8%, 8=50.7%, 16=6.7%, 32=0.0%, >=64=0.0% 00:35:01.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.085 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.085 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.085 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:01.085 filename1: (groupid=0, jobs=1): err= 0: pid=1913339: Mon Jul 22 18:12:03 2024 00:35:01.085 read: IOPS=564, BW=2259KiB/s (2313kB/s)(22.1MiB/10002msec) 00:35:01.085 slat (usec): min=7, max=126, avg=44.29, stdev=19.20 00:35:01.085 clat (usec): min=10568, max=49471, avg=27910.86, stdev=1501.45 00:35:01.085 lat (usec): min=10588, max=49490, avg=27955.15, stdev=1501.59 00:35:01.085 clat percentiles (usec): 00:35:01.085 | 1.00th=[27395], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:35:01.085 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:35:01.085 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:35:01.085 | 99.00th=[28967], 99.50th=[28967], 99.90th=[49546], 99.95th=[49546], 00:35:01.085 | 99.99th=[49546] 00:35:01.085 bw ( KiB/s): min= 2048, max= 2304, per=4.15%, avg=2256.84, stdev=76.45, samples=19 00:35:01.086 iops : min= 512, max= 576, avg=564.21, stdev=19.11, samples=19 00:35:01.086 lat (msec) : 20=0.28%, 50=99.72% 00:35:01.086 cpu : usr=99.19%, sys=0.51%, ctx=13, majf=0, minf=40 00:35:01.086 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:01.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.086 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.086 issued rwts: total=5648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.086 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:01.086 filename1: (groupid=0, jobs=1): err= 0: pid=1913340: Mon Jul 22 18:12:03 2024 00:35:01.086 read: IOPS=563, BW=2256KiB/s (2310kB/s)(22.1MiB/10015msec) 00:35:01.086 slat (usec): min=5, max=126, avg=46.64, stdev=19.02 00:35:01.086 clat (usec): min=22995, max=50744, avg=27951.06, stdev=1018.17 00:35:01.086 lat (usec): min=23003, max=50760, 
avg=27997.70, stdev=1016.90 00:35:01.086 clat percentiles (usec): 00:35:01.086 | 1.00th=[27395], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:35:01.086 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:35:01.086 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:35:01.086 | 99.00th=[28967], 99.50th=[28967], 99.90th=[45351], 99.95th=[45351], 00:35:01.086 | 99.99th=[50594] 00:35:01.086 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2256.32, stdev=63.04, samples=19 00:35:01.086 iops : min= 544, max= 576, avg=564.00, stdev=15.71, samples=19 00:35:01.086 lat (msec) : 50=99.96%, 100=0.04% 00:35:01.086 cpu : usr=99.21%, sys=0.48%, ctx=51, majf=0, minf=31 00:35:01.086 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:01.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.086 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.086 issued rwts: total=5648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.086 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:01.086 filename1: (groupid=0, jobs=1): err= 0: pid=1913341: Mon Jul 22 18:12:03 2024 00:35:01.086 read: IOPS=564, BW=2256KiB/s (2310kB/s)(22.1MiB/10013msec) 00:35:01.086 slat (usec): min=5, max=123, avg=44.34, stdev=18.84 00:35:01.086 clat (usec): min=22925, max=48561, avg=27941.73, stdev=898.15 00:35:01.086 lat (usec): min=22932, max=48578, avg=27986.07, stdev=898.24 00:35:01.086 clat percentiles (usec): 00:35:01.086 | 1.00th=[27395], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:35:01.086 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:35:01.086 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:35:01.086 | 99.00th=[28967], 99.50th=[28967], 99.90th=[42730], 99.95th=[42730], 00:35:01.086 | 99.99th=[48497] 00:35:01.086 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2256.58, stdev=63.24, samples=19 00:35:01.086 iops : min= 544, max= 576, avg=564.11, stdev=15.78, samples=19 00:35:01.086 lat (msec) : 50=100.00% 00:35:01.086 cpu : usr=99.25%, sys=0.47%, ctx=15, majf=0, minf=44 00:35:01.086 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:01.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.086 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.086 issued rwts: total=5648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.086 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:01.086 filename1: (groupid=0, jobs=1): err= 0: pid=1913342: Mon Jul 22 18:12:03 2024 00:35:01.086 read: IOPS=571, BW=2286KiB/s (2340kB/s)(22.4MiB/10014msec) 00:35:01.086 slat (usec): min=7, max=100, avg=12.45, stdev= 9.54 00:35:01.086 clat (usec): min=3708, max=36749, avg=27898.45, stdev=2755.24 00:35:01.086 lat (usec): min=3722, max=36756, avg=27910.90, stdev=2754.46 00:35:01.086 clat percentiles (usec): 00:35:01.086 | 1.00th=[ 5800], 5.00th=[27919], 10.00th=[27919], 20.00th=[27919], 00:35:01.086 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:35:01.086 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:35:01.086 | 99.00th=[29230], 99.50th=[32375], 99.90th=[33162], 99.95th=[33162], 00:35:01.086 | 99.99th=[36963] 00:35:01.086 bw ( KiB/s): min= 2176, max= 2768, per=4.21%, avg=2287.74, stdev=130.17, samples=19 00:35:01.086 iops : min= 544, max= 692, avg=571.89, stdev=32.54, samples=19 00:35:01.086 lat (msec) : 4=0.30%, 
10=1.00%, 20=0.28%, 50=98.43% 00:35:01.086 cpu : usr=98.86%, sys=0.76%, ctx=72, majf=0, minf=42 00:35:01.086 IO depths : 1=5.7%, 2=11.8%, 4=24.6%, 8=51.0%, 16=6.8%, 32=0.0%, >=64=0.0% 00:35:01.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.086 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.086 issued rwts: total=5722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.086 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:01.086 filename1: (groupid=0, jobs=1): err= 0: pid=1913343: Mon Jul 22 18:12:03 2024 00:35:01.086 read: IOPS=564, BW=2259KiB/s (2313kB/s)(22.1MiB/10003msec) 00:35:01.086 slat (nsec): min=7249, max=95703, avg=16030.93, stdev=9112.52 00:35:01.086 clat (usec): min=24922, max=38319, avg=28191.25, stdev=613.57 00:35:01.086 lat (usec): min=24935, max=38344, avg=28207.28, stdev=612.81 00:35:01.086 clat percentiles (usec): 00:35:01.086 | 1.00th=[27657], 5.00th=[27919], 10.00th=[27919], 20.00th=[27919], 00:35:01.086 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:35:01.086 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:35:01.086 | 99.00th=[28967], 99.50th=[28967], 99.90th=[38011], 99.95th=[38536], 00:35:01.086 | 99.99th=[38536] 00:35:01.086 bw ( KiB/s): min= 2171, max= 2304, per=4.15%, avg=2256.58, stdev=63.80, samples=19 00:35:01.086 iops : min= 542, max= 576, avg=564.11, stdev=16.01, samples=19 00:35:01.086 lat (msec) : 50=100.00% 00:35:01.086 cpu : usr=98.58%, sys=0.82%, ctx=156, majf=0, minf=46 00:35:01.086 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:01.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.086 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.086 issued rwts: total=5648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.086 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:01.086 filename1: (groupid=0, jobs=1): err= 0: pid=1913345: Mon Jul 22 18:12:03 2024 00:35:01.086 read: IOPS=566, BW=2267KiB/s (2322kB/s)(22.1MiB/10003msec) 00:35:01.086 slat (usec): min=6, max=125, avg=43.42, stdev=19.30 00:35:01.086 clat (usec): min=6038, max=43807, avg=27814.70, stdev=1982.84 00:35:01.086 lat (usec): min=6046, max=43823, avg=27858.13, stdev=1984.56 00:35:01.086 clat percentiles (usec): 00:35:01.086 | 1.00th=[25297], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:35:01.086 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:35:01.086 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:35:01.086 | 99.00th=[28967], 99.50th=[38536], 99.90th=[43779], 99.95th=[43779], 00:35:01.086 | 99.99th=[43779] 00:35:01.086 bw ( KiB/s): min= 2171, max= 2304, per=4.15%, avg=2256.53, stdev=63.33, samples=19 00:35:01.086 iops : min= 542, max= 576, avg=564.05, stdev=15.86, samples=19 00:35:01.086 lat (msec) : 10=0.46%, 20=0.28%, 50=99.26% 00:35:01.086 cpu : usr=99.34%, sys=0.38%, ctx=7, majf=0, minf=51 00:35:01.086 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:01.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.086 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.086 issued rwts: total=5670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.086 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:01.086 filename2: (groupid=0, jobs=1): err= 0: pid=1913346: Mon Jul 22 18:12:03 2024 00:35:01.086 read: 
IOPS=564, BW=2256KiB/s (2310kB/s)(22.1MiB/10013msec) 00:35:01.086 slat (usec): min=6, max=128, avg=29.42, stdev=22.97 00:35:01.086 clat (usec): min=24558, max=42747, avg=28149.64, stdev=840.23 00:35:01.086 lat (usec): min=24566, max=42767, avg=28179.06, stdev=835.99 00:35:01.086 clat percentiles (usec): 00:35:01.086 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:35:01.086 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:35:01.086 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:35:01.086 | 99.00th=[28967], 99.50th=[29230], 99.90th=[42730], 99.95th=[42730], 00:35:01.086 | 99.99th=[42730] 00:35:01.086 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2256.58, stdev=63.24, samples=19 00:35:01.086 iops : min= 544, max= 576, avg=564.11, stdev=15.78, samples=19 00:35:01.086 lat (msec) : 50=100.00% 00:35:01.086 cpu : usr=99.04%, sys=0.60%, ctx=48, majf=0, minf=36 00:35:01.086 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:01.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.086 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.086 issued rwts: total=5648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.086 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:01.086 filename2: (groupid=0, jobs=1): err= 0: pid=1913347: Mon Jul 22 18:12:03 2024 00:35:01.086 read: IOPS=564, BW=2258KiB/s (2312kB/s)(22.1MiB/10007msec) 00:35:01.086 slat (nsec): min=7216, max=86062, avg=15102.90, stdev=9111.17 00:35:01.086 clat (usec): min=24881, max=42747, avg=28225.22, stdev=824.69 00:35:01.086 lat (usec): min=24889, max=42778, avg=28240.33, stdev=823.93 00:35:01.086 clat percentiles (usec): 00:35:01.086 | 1.00th=[27657], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:35:01.086 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:35:01.086 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:35:01.086 | 99.00th=[28967], 99.50th=[28967], 99.90th=[42730], 99.95th=[42730], 00:35:01.086 | 99.99th=[42730] 00:35:01.086 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2256.84, stdev=63.44, samples=19 00:35:01.086 iops : min= 544, max= 576, avg=564.21, stdev=15.86, samples=19 00:35:01.086 lat (msec) : 50=100.00% 00:35:01.086 cpu : usr=99.17%, sys=0.54%, ctx=13, majf=0, minf=55 00:35:01.086 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:01.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.086 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.086 issued rwts: total=5648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.087 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:01.087 filename2: (groupid=0, jobs=1): err= 0: pid=1913348: Mon Jul 22 18:12:03 2024 00:35:01.087 read: IOPS=564, BW=2256KiB/s (2310kB/s)(22.1MiB/10013msec) 00:35:01.087 slat (usec): min=7, max=128, avg=43.18, stdev=21.01 00:35:01.087 clat (usec): min=22894, max=48062, avg=28015.79, stdev=878.14 00:35:01.087 lat (usec): min=22904, max=48086, avg=28058.97, stdev=874.55 00:35:01.087 clat percentiles (usec): 00:35:01.087 | 1.00th=[27395], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:35:01.087 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:35:01.087 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:35:01.087 | 99.00th=[28967], 99.50th=[28967], 99.90th=[42206], 99.95th=[42206], 
00:35:01.087 | 99.99th=[47973] 00:35:01.087 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2256.79, stdev=62.96, samples=19 00:35:01.087 iops : min= 544, max= 576, avg=564.16, stdev=15.71, samples=19 00:35:01.087 lat (msec) : 50=100.00% 00:35:01.087 cpu : usr=99.12%, sys=0.50%, ctx=49, majf=0, minf=59 00:35:01.087 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:01.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.087 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.087 issued rwts: total=5648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.087 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:01.087 filename2: (groupid=0, jobs=1): err= 0: pid=1913349: Mon Jul 22 18:12:03 2024 00:35:01.087 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10006msec) 00:35:01.087 slat (nsec): min=7205, max=64893, avg=11334.94, stdev=6760.05 00:35:01.087 clat (usec): min=3160, max=33026, avg=27922.46, stdev=2553.88 00:35:01.087 lat (usec): min=3177, max=33035, avg=27933.79, stdev=2552.94 00:35:01.087 clat percentiles (usec): 00:35:01.087 | 1.00th=[ 7832], 5.00th=[27919], 10.00th=[27919], 20.00th=[28181], 00:35:01.087 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:35:01.087 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:35:01.087 | 99.00th=[28967], 99.50th=[29230], 99.90th=[32900], 99.95th=[32900], 00:35:01.087 | 99.99th=[32900] 00:35:01.087 bw ( KiB/s): min= 2176, max= 2682, per=4.20%, avg=2283.47, stdev=112.94, samples=19 00:35:01.087 iops : min= 544, max= 670, avg=570.84, stdev=28.14, samples=19 00:35:01.087 lat (msec) : 4=0.46%, 10=0.67%, 20=0.28%, 50=98.60% 00:35:01.087 cpu : usr=99.04%, sys=0.57%, ctx=79, majf=0, minf=57 00:35:01.087 IO depths : 1=5.9%, 2=12.1%, 4=24.9%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:35:01.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.087 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.087 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.087 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:01.087 filename2: (groupid=0, jobs=1): err= 0: pid=1913350: Mon Jul 22 18:12:03 2024 00:35:01.087 read: IOPS=566, BW=2265KiB/s (2320kB/s)(22.1MiB/10004msec) 00:35:01.087 slat (usec): min=5, max=116, avg=18.59, stdev=17.65 00:35:01.087 clat (usec): min=6115, max=50311, avg=28124.74, stdev=2429.92 00:35:01.087 lat (usec): min=6138, max=50319, avg=28143.34, stdev=2429.43 00:35:01.087 clat percentiles (usec): 00:35:01.087 | 1.00th=[18482], 5.00th=[27657], 10.00th=[27919], 20.00th=[28181], 00:35:01.087 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:35:01.087 | 70.00th=[28181], 80.00th=[28443], 90.00th=[28443], 95.00th=[28705], 00:35:01.087 | 99.00th=[34866], 99.50th=[40109], 99.90th=[50070], 99.95th=[50070], 00:35:01.087 | 99.99th=[50070] 00:35:01.087 bw ( KiB/s): min= 2160, max= 2336, per=4.16%, avg=2260.53, stdev=52.87, samples=19 00:35:01.087 iops : min= 540, max= 584, avg=565.05, stdev=13.27, samples=19 00:35:01.087 lat (msec) : 10=0.49%, 20=0.71%, 50=98.66%, 100=0.14% 00:35:01.087 cpu : usr=99.31%, sys=0.39%, ctx=20, majf=0, minf=73 00:35:01.087 IO depths : 1=0.4%, 2=2.7%, 4=9.4%, 8=71.6%, 16=15.9%, 32=0.0%, >=64=0.0% 00:35:01.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.087 complete : 0=0.0%, 4=91.2%, 8=6.7%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:35:01.087 issued rwts: total=5666,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.087 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:01.087 filename2: (groupid=0, jobs=1): err= 0: pid=1913351: Mon Jul 22 18:12:03 2024 00:35:01.087 read: IOPS=566, BW=2265KiB/s (2319kB/s)(22.1MiB/10003msec) 00:35:01.087 slat (usec): min=5, max=127, avg=44.73, stdev=18.93 00:35:01.087 clat (usec): min=5752, max=44033, avg=27833.20, stdev=1741.12 00:35:01.087 lat (usec): min=5760, max=44048, avg=27877.93, stdev=1742.64 00:35:01.087 clat percentiles (usec): 00:35:01.087 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:35:01.087 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:35:01.087 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:35:01.087 | 99.00th=[28967], 99.50th=[29230], 99.90th=[43779], 99.95th=[43779], 00:35:01.087 | 99.99th=[43779] 00:35:01.087 bw ( KiB/s): min= 2171, max= 2304, per=4.15%, avg=2256.32, stdev=63.60, samples=19 00:35:01.087 iops : min= 542, max= 576, avg=564.00, stdev=15.93, samples=19 00:35:01.087 lat (msec) : 10=0.28%, 20=0.28%, 50=99.44% 00:35:01.087 cpu : usr=99.22%, sys=0.51%, ctx=12, majf=0, minf=32 00:35:01.087 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:01.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.087 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.087 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.087 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:01.087 filename2: (groupid=0, jobs=1): err= 0: pid=1913352: Mon Jul 22 18:12:03 2024 00:35:01.087 read: IOPS=564, BW=2259KiB/s (2313kB/s)(22.1MiB/10003msec) 00:35:01.087 slat (usec): min=7, max=128, avg=44.60, stdev=19.25 00:35:01.087 clat (usec): min=10747, max=49696, avg=27941.01, stdev=1638.32 00:35:01.087 lat (usec): min=10776, max=49717, avg=27985.61, stdev=1638.11 00:35:01.087 clat percentiles (usec): 00:35:01.087 | 1.00th=[25822], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:35:01.087 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:35:01.087 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:35:01.087 | 99.00th=[29230], 99.50th=[33817], 99.90th=[49546], 99.95th=[49546], 00:35:01.087 | 99.99th=[49546] 00:35:01.087 bw ( KiB/s): min= 2048, max= 2304, per=4.15%, avg=2256.84, stdev=72.24, samples=19 00:35:01.087 iops : min= 512, max= 576, avg=564.21, stdev=18.06, samples=19 00:35:01.087 lat (msec) : 20=0.42%, 50=99.58% 00:35:01.087 cpu : usr=98.99%, sys=0.61%, ctx=119, majf=0, minf=48 00:35:01.087 IO depths : 1=4.8%, 2=10.9%, 4=24.7%, 8=51.8%, 16=7.7%, 32=0.0%, >=64=0.0% 00:35:01.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.087 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.087 issued rwts: total=5648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.087 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:01.087 filename2: (groupid=0, jobs=1): err= 0: pid=1913354: Mon Jul 22 18:12:03 2024 00:35:01.087 read: IOPS=565, BW=2261KiB/s (2315kB/s)(22.1MiB/10020msec) 00:35:01.087 slat (usec): min=7, max=113, avg=43.19, stdev=19.11 00:35:01.087 clat (usec): min=20728, max=30835, avg=27937.47, stdev=491.81 00:35:01.087 lat (usec): min=20752, max=30884, avg=27980.66, stdev=491.27 00:35:01.087 clat percentiles (usec): 00:35:01.087 | 1.00th=[27395], 5.00th=[27395], 
10.00th=[27657], 20.00th=[27657], 00:35:01.087 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:35:01.087 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28443], 00:35:01.087 | 99.00th=[28705], 99.50th=[28967], 99.90th=[30016], 99.95th=[30016], 00:35:01.087 | 99.99th=[30802] 00:35:01.087 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2258.95, stdev=62.46, samples=20 00:35:01.087 iops : min= 544, max= 576, avg=564.70, stdev=15.59, samples=20 00:35:01.087 lat (msec) : 50=100.00% 00:35:01.087 cpu : usr=98.26%, sys=1.04%, ctx=110, majf=0, minf=45 00:35:01.087 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:01.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.087 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.087 issued rwts: total=5664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.087 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:01.087 00:35:01.087 Run status group 0 (all jobs): 00:35:01.087 READ: bw=53.1MiB/s (55.6MB/s), 2256KiB/s-2321KiB/s (2310kB/s-2377kB/s), io=532MiB (558MB), run=10002-10020msec 00:35:01.087 18:12:03 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:01.087 18:12:03 -- target/dif.sh@43 -- # local sub 00:35:01.087 18:12:03 -- target/dif.sh@45 -- # for sub in "$@" 00:35:01.087 18:12:03 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:01.087 18:12:03 -- target/dif.sh@36 -- # local sub_id=0 00:35:01.087 18:12:03 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:01.087 18:12:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:01.087 18:12:03 -- common/autotest_common.sh@10 -- # set +x 00:35:01.087 18:12:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:01.087 18:12:03 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:01.087 18:12:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:01.087 18:12:03 -- common/autotest_common.sh@10 -- # set +x 00:35:01.087 18:12:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:01.087 18:12:03 -- target/dif.sh@45 -- # for sub in "$@" 00:35:01.087 18:12:03 -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:01.087 18:12:03 -- target/dif.sh@36 -- # local sub_id=1 00:35:01.087 18:12:03 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:01.087 18:12:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:01.087 18:12:03 -- common/autotest_common.sh@10 -- # set +x 00:35:01.087 18:12:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:01.087 18:12:03 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:01.087 18:12:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:01.087 18:12:03 -- common/autotest_common.sh@10 -- # set +x 00:35:01.087 18:12:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:01.087 18:12:03 -- target/dif.sh@45 -- # for sub in "$@" 00:35:01.087 18:12:03 -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:01.087 18:12:03 -- target/dif.sh@36 -- # local sub_id=2 00:35:01.088 18:12:03 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:01.088 18:12:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:01.088 18:12:03 -- common/autotest_common.sh@10 -- # set +x 00:35:01.088 18:12:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:01.088 18:12:03 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:01.088 18:12:03 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:35:01.088 18:12:03 -- common/autotest_common.sh@10 -- # set +x 00:35:01.088 18:12:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:01.088 18:12:03 -- target/dif.sh@115 -- # NULL_DIF=1 00:35:01.088 18:12:03 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:01.088 18:12:03 -- target/dif.sh@115 -- # numjobs=2 00:35:01.088 18:12:03 -- target/dif.sh@115 -- # iodepth=8 00:35:01.088 18:12:03 -- target/dif.sh@115 -- # runtime=5 00:35:01.088 18:12:03 -- target/dif.sh@115 -- # files=1 00:35:01.088 18:12:03 -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:01.088 18:12:03 -- target/dif.sh@28 -- # local sub 00:35:01.088 18:12:03 -- target/dif.sh@30 -- # for sub in "$@" 00:35:01.088 18:12:03 -- target/dif.sh@31 -- # create_subsystem 0 00:35:01.088 18:12:03 -- target/dif.sh@18 -- # local sub_id=0 00:35:01.088 18:12:03 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:01.088 18:12:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:01.088 18:12:03 -- common/autotest_common.sh@10 -- # set +x 00:35:01.088 bdev_null0 00:35:01.088 18:12:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:01.088 18:12:03 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:01.088 18:12:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:01.088 18:12:03 -- common/autotest_common.sh@10 -- # set +x 00:35:01.088 18:12:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:01.088 18:12:03 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:01.088 18:12:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:01.088 18:12:03 -- common/autotest_common.sh@10 -- # set +x 00:35:01.088 18:12:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:01.088 18:12:03 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:01.088 18:12:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:01.088 18:12:03 -- common/autotest_common.sh@10 -- # set +x 00:35:01.088 [2024-07-22 18:12:03.456867] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:01.088 18:12:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:01.088 18:12:03 -- target/dif.sh@30 -- # for sub in "$@" 00:35:01.088 18:12:03 -- target/dif.sh@31 -- # create_subsystem 1 00:35:01.088 18:12:03 -- target/dif.sh@18 -- # local sub_id=1 00:35:01.088 18:12:03 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:01.088 18:12:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:01.088 18:12:03 -- common/autotest_common.sh@10 -- # set +x 00:35:01.088 bdev_null1 00:35:01.088 18:12:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:01.088 18:12:03 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:01.088 18:12:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:01.088 18:12:03 -- common/autotest_common.sh@10 -- # set +x 00:35:01.088 18:12:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:01.088 18:12:03 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:01.088 18:12:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:01.088 18:12:03 -- common/autotest_common.sh@10 -- # set +x 
00:35:01.088 18:12:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:01.088 18:12:03 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:01.088 18:12:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:01.088 18:12:03 -- common/autotest_common.sh@10 -- # set +x 00:35:01.088 18:12:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:01.088 18:12:03 -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:01.088 18:12:03 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:01.088 18:12:03 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:01.088 18:12:03 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:01.088 18:12:03 -- nvmf/common.sh@520 -- # config=() 00:35:01.088 18:12:03 -- nvmf/common.sh@520 -- # local subsystem config 00:35:01.088 18:12:03 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:01.088 18:12:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:01.088 18:12:03 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:35:01.088 18:12:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:01.088 { 00:35:01.088 "params": { 00:35:01.088 "name": "Nvme$subsystem", 00:35:01.088 "trtype": "$TEST_TRANSPORT", 00:35:01.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:01.088 "adrfam": "ipv4", 00:35:01.088 "trsvcid": "$NVMF_PORT", 00:35:01.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:01.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:01.088 "hdgst": ${hdgst:-false}, 00:35:01.088 "ddgst": ${ddgst:-false} 00:35:01.088 }, 00:35:01.088 "method": "bdev_nvme_attach_controller" 00:35:01.088 } 00:35:01.088 EOF 00:35:01.088 )") 00:35:01.088 18:12:03 -- target/dif.sh@82 -- # gen_fio_conf 00:35:01.088 18:12:03 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:01.088 18:12:03 -- target/dif.sh@54 -- # local file 00:35:01.088 18:12:03 -- common/autotest_common.sh@1318 -- # local sanitizers 00:35:01.088 18:12:03 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:01.088 18:12:03 -- target/dif.sh@56 -- # cat 00:35:01.088 18:12:03 -- common/autotest_common.sh@1320 -- # shift 00:35:01.088 18:12:03 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:35:01.088 18:12:03 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:35:01.088 18:12:03 -- nvmf/common.sh@542 -- # cat 00:35:01.088 18:12:03 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:01.088 18:12:03 -- common/autotest_common.sh@1324 -- # grep libasan 00:35:01.088 18:12:03 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:35:01.088 18:12:03 -- target/dif.sh@72 -- # (( file = 1 )) 00:35:01.088 18:12:03 -- target/dif.sh@72 -- # (( file <= files )) 00:35:01.088 18:12:03 -- target/dif.sh@73 -- # cat 00:35:01.088 18:12:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:01.088 18:12:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:01.088 { 00:35:01.088 "params": { 00:35:01.088 "name": "Nvme$subsystem", 00:35:01.088 "trtype": "$TEST_TRANSPORT", 00:35:01.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:01.088 "adrfam": "ipv4", 00:35:01.088 "trsvcid": "$NVMF_PORT", 00:35:01.088 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:35:01.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:01.088 "hdgst": ${hdgst:-false}, 00:35:01.088 "ddgst": ${ddgst:-false} 00:35:01.088 }, 00:35:01.088 "method": "bdev_nvme_attach_controller" 00:35:01.088 } 00:35:01.088 EOF 00:35:01.088 )") 00:35:01.088 18:12:03 -- target/dif.sh@72 -- # (( file++ )) 00:35:01.088 18:12:03 -- nvmf/common.sh@542 -- # cat 00:35:01.088 18:12:03 -- target/dif.sh@72 -- # (( file <= files )) 00:35:01.088 18:12:03 -- nvmf/common.sh@544 -- # jq . 00:35:01.088 18:12:03 -- nvmf/common.sh@545 -- # IFS=, 00:35:01.088 18:12:03 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:35:01.088 "params": { 00:35:01.088 "name": "Nvme0", 00:35:01.088 "trtype": "tcp", 00:35:01.088 "traddr": "10.0.0.2", 00:35:01.088 "adrfam": "ipv4", 00:35:01.088 "trsvcid": "4420", 00:35:01.088 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:01.088 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:01.088 "hdgst": false, 00:35:01.088 "ddgst": false 00:35:01.088 }, 00:35:01.088 "method": "bdev_nvme_attach_controller" 00:35:01.088 },{ 00:35:01.088 "params": { 00:35:01.088 "name": "Nvme1", 00:35:01.088 "trtype": "tcp", 00:35:01.088 "traddr": "10.0.0.2", 00:35:01.088 "adrfam": "ipv4", 00:35:01.088 "trsvcid": "4420", 00:35:01.088 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:01.088 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:01.088 "hdgst": false, 00:35:01.088 "ddgst": false 00:35:01.088 }, 00:35:01.088 "method": "bdev_nvme_attach_controller" 00:35:01.088 }' 00:35:01.088 18:12:03 -- common/autotest_common.sh@1324 -- # asan_lib= 00:35:01.088 18:12:03 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:35:01.088 18:12:03 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:35:01.088 18:12:03 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:01.088 18:12:03 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:35:01.088 18:12:03 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:35:01.088 18:12:03 -- common/autotest_common.sh@1324 -- # asan_lib= 00:35:01.088 18:12:03 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:35:01.088 18:12:03 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:01.088 18:12:03 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:01.088 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:01.088 ... 00:35:01.088 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:01.088 ... 00:35:01.088 fio-3.35 00:35:01.088 Starting 4 threads 00:35:01.088 EAL: No free 2048 kB hugepages reported on node 1 00:35:01.088 [2024-07-22 18:12:04.538883] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:35:01.089 [2024-07-22 18:12:04.538929] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:35:06.369 00:35:06.369 filename0: (groupid=0, jobs=1): err= 0: pid=1915498: Mon Jul 22 18:12:09 2024 00:35:06.369 read: IOPS=2358, BW=18.4MiB/s (19.3MB/s)(92.2MiB/5002msec) 00:35:06.369 slat (nsec): min=11393, max=84443, avg=25145.36, stdev=2818.02 00:35:06.369 clat (usec): min=1413, max=5591, avg=3325.55, stdev=387.64 00:35:06.369 lat (usec): min=1437, max=5616, avg=3350.69, stdev=387.86 00:35:06.369 clat percentiles (usec): 00:35:06.369 | 1.00th=[ 2409], 5.00th=[ 2737], 10.00th=[ 2933], 20.00th=[ 3064], 00:35:06.369 | 30.00th=[ 3130], 40.00th=[ 3228], 50.00th=[ 3294], 60.00th=[ 3326], 00:35:06.369 | 70.00th=[ 3425], 80.00th=[ 3589], 90.00th=[ 3818], 95.00th=[ 4047], 00:35:06.369 | 99.00th=[ 4490], 99.50th=[ 4555], 99.90th=[ 5211], 99.95th=[ 5407], 00:35:06.369 | 99.99th=[ 5473] 00:35:06.369 bw ( KiB/s): min=18624, max=18992, per=24.73%, avg=18858.67, stdev=127.25, samples=9 00:35:06.369 iops : min= 2328, max= 2374, avg=2357.33, stdev=15.91, samples=9 00:35:06.369 lat (msec) : 2=0.14%, 4=94.02%, 10=5.85% 00:35:06.369 cpu : usr=95.48%, sys=3.78%, ctx=14, majf=0, minf=29 00:35:06.369 IO depths : 1=0.1%, 2=0.9%, 4=70.2%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:06.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.369 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.369 issued rwts: total=11799,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:06.369 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:06.369 filename0: (groupid=0, jobs=1): err= 0: pid=1915499: Mon Jul 22 18:12:09 2024 00:35:06.369 read: IOPS=2342, BW=18.3MiB/s (19.2MB/s)(91.6MiB/5003msec) 00:35:06.369 slat (nsec): min=7200, max=76426, avg=8411.04, stdev=3204.34 00:35:06.369 clat (usec): min=708, max=5882, avg=3391.69, stdev=605.18 00:35:06.369 lat (usec): min=727, max=5889, avg=3400.10, stdev=605.16 00:35:06.369 clat percentiles (usec): 00:35:06.369 | 1.00th=[ 2409], 5.00th=[ 2704], 10.00th=[ 2835], 20.00th=[ 2999], 00:35:06.369 | 30.00th=[ 3130], 40.00th=[ 3228], 50.00th=[ 3261], 60.00th=[ 3294], 00:35:06.369 | 70.00th=[ 3359], 80.00th=[ 3556], 90.00th=[ 4424], 95.00th=[ 4817], 00:35:06.369 | 99.00th=[ 5342], 99.50th=[ 5473], 99.90th=[ 5604], 99.95th=[ 5735], 00:35:06.369 | 99.99th=[ 5866] 00:35:06.369 bw ( KiB/s): min=18544, max=18944, per=24.58%, avg=18745.60, stdev=144.53, samples=10 00:35:06.369 iops : min= 2318, max= 2368, avg=2343.20, stdev=18.07, samples=10 00:35:06.369 lat (usec) : 750=0.01% 00:35:06.369 lat (msec) : 2=0.09%, 4=86.28%, 10=13.62% 00:35:06.369 cpu : usr=97.40%, sys=2.34%, ctx=6, majf=0, minf=46 00:35:06.369 IO depths : 1=0.1%, 2=0.1%, 4=72.8%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:06.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.369 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.369 issued rwts: total=11721,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:06.369 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:06.369 filename1: (groupid=0, jobs=1): err= 0: pid=1915500: Mon Jul 22 18:12:09 2024 00:35:06.369 read: IOPS=2464, BW=19.3MiB/s (20.2MB/s)(96.3MiB/5002msec) 00:35:06.369 slat (nsec): min=7198, max=77963, avg=8243.42, stdev=2829.86 00:35:06.369 clat (usec): min=1741, max=5646, avg=3224.67, stdev=394.69 00:35:06.369 lat (usec): min=1748, max=5655, avg=3232.92, stdev=394.76 00:35:06.369 clat percentiles 
(usec): 00:35:06.369 | 1.00th=[ 2245], 5.00th=[ 2606], 10.00th=[ 2769], 20.00th=[ 2966], 00:35:06.369 | 30.00th=[ 3097], 40.00th=[ 3130], 50.00th=[ 3228], 60.00th=[ 3294], 00:35:06.369 | 70.00th=[ 3326], 80.00th=[ 3425], 90.00th=[ 3654], 95.00th=[ 3949], 00:35:06.369 | 99.00th=[ 4490], 99.50th=[ 4686], 99.90th=[ 5276], 99.95th=[ 5342], 00:35:06.369 | 99.99th=[ 5669] 00:35:06.369 bw ( KiB/s): min=19456, max=19856, per=25.85%, avg=19715.56, stdev=151.95, samples=9 00:35:06.369 iops : min= 2432, max= 2482, avg=2464.44, stdev=18.99, samples=9 00:35:06.369 lat (msec) : 2=0.19%, 4=95.40%, 10=4.40% 00:35:06.369 cpu : usr=97.38%, sys=2.36%, ctx=5, majf=0, minf=40 00:35:06.370 IO depths : 1=0.1%, 2=0.3%, 4=69.7%, 8=29.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:06.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.370 complete : 0=0.0%, 4=94.4%, 8=5.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.370 issued rwts: total=12327,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:06.370 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:06.370 filename1: (groupid=0, jobs=1): err= 0: pid=1915501: Mon Jul 22 18:12:09 2024 00:35:06.370 read: IOPS=2367, BW=18.5MiB/s (19.4MB/s)(92.5MiB/5002msec) 00:35:06.370 slat (nsec): min=7195, max=76103, avg=8540.59, stdev=2856.49 00:35:06.370 clat (usec): min=1328, max=6157, avg=3355.38, stdev=407.93 00:35:06.370 lat (usec): min=1336, max=6172, avg=3363.92, stdev=408.00 00:35:06.370 clat percentiles (usec): 00:35:06.370 | 1.00th=[ 2638], 5.00th=[ 2900], 10.00th=[ 3032], 20.00th=[ 3130], 00:35:06.370 | 30.00th=[ 3163], 40.00th=[ 3294], 50.00th=[ 3326], 60.00th=[ 3326], 00:35:06.370 | 70.00th=[ 3359], 80.00th=[ 3556], 90.00th=[ 3654], 95.00th=[ 3982], 00:35:06.370 | 99.00th=[ 5145], 99.50th=[ 5276], 99.90th=[ 5538], 99.95th=[ 5932], 00:35:06.370 | 99.99th=[ 6128] 00:35:06.370 bw ( KiB/s): min=18720, max=19104, per=24.84%, avg=18944.00, stdev=147.51, samples=9 00:35:06.370 iops : min= 2340, max= 2388, avg=2368.00, stdev=18.44, samples=9 00:35:06.370 lat (msec) : 2=0.15%, 4=94.97%, 10=4.88% 00:35:06.370 cpu : usr=97.50%, sys=2.14%, ctx=66, majf=0, minf=58 00:35:06.370 IO depths : 1=0.1%, 2=0.1%, 4=73.3%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:06.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.370 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:06.370 issued rwts: total=11840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:06.370 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:06.370 00:35:06.370 Run status group 0 (all jobs): 00:35:06.370 READ: bw=74.5MiB/s (78.1MB/s), 18.3MiB/s-19.3MiB/s (19.2MB/s-20.2MB/s), io=373MiB (391MB), run=5002-5003msec 00:35:06.370 18:12:09 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:06.370 18:12:09 -- target/dif.sh@43 -- # local sub 00:35:06.370 18:12:09 -- target/dif.sh@45 -- # for sub in "$@" 00:35:06.370 18:12:09 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:06.370 18:12:09 -- target/dif.sh@36 -- # local sub_id=0 00:35:06.370 18:12:09 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:06.370 18:12:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:06.370 18:12:09 -- common/autotest_common.sh@10 -- # set +x 00:35:06.370 18:12:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:06.370 18:12:09 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:06.370 18:12:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:06.370 18:12:09 -- 
common/autotest_common.sh@10 -- # set +x 00:35:06.370 18:12:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:06.370 18:12:09 -- target/dif.sh@45 -- # for sub in "$@" 00:35:06.370 18:12:09 -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:06.370 18:12:09 -- target/dif.sh@36 -- # local sub_id=1 00:35:06.370 18:12:09 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:06.370 18:12:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:06.370 18:12:09 -- common/autotest_common.sh@10 -- # set +x 00:35:06.370 18:12:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:06.370 18:12:09 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:06.370 18:12:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:06.370 18:12:09 -- common/autotest_common.sh@10 -- # set +x 00:35:06.370 18:12:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:06.370 00:35:06.370 real 0m24.348s 00:35:06.370 user 5m5.310s 00:35:06.370 sys 0m3.773s 00:35:06.370 18:12:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:06.370 18:12:09 -- common/autotest_common.sh@10 -- # set +x 00:35:06.370 ************************************ 00:35:06.370 END TEST fio_dif_rand_params 00:35:06.370 ************************************ 00:35:06.370 18:12:09 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:06.370 18:12:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:06.370 18:12:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:06.370 18:12:09 -- common/autotest_common.sh@10 -- # set +x 00:35:06.370 ************************************ 00:35:06.370 START TEST fio_dif_digest 00:35:06.370 ************************************ 00:35:06.370 18:12:09 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:35:06.370 18:12:09 -- target/dif.sh@123 -- # local NULL_DIF 00:35:06.370 18:12:09 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:06.370 18:12:09 -- target/dif.sh@125 -- # local hdgst ddgst 00:35:06.370 18:12:09 -- target/dif.sh@127 -- # NULL_DIF=3 00:35:06.370 18:12:09 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:06.370 18:12:09 -- target/dif.sh@127 -- # numjobs=3 00:35:06.370 18:12:09 -- target/dif.sh@127 -- # iodepth=3 00:35:06.370 18:12:09 -- target/dif.sh@127 -- # runtime=10 00:35:06.370 18:12:09 -- target/dif.sh@128 -- # hdgst=true 00:35:06.370 18:12:09 -- target/dif.sh@128 -- # ddgst=true 00:35:06.370 18:12:09 -- target/dif.sh@130 -- # create_subsystems 0 00:35:06.370 18:12:09 -- target/dif.sh@28 -- # local sub 00:35:06.370 18:12:09 -- target/dif.sh@30 -- # for sub in "$@" 00:35:06.370 18:12:09 -- target/dif.sh@31 -- # create_subsystem 0 00:35:06.370 18:12:09 -- target/dif.sh@18 -- # local sub_id=0 00:35:06.370 18:12:09 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:06.370 18:12:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:06.370 18:12:09 -- common/autotest_common.sh@10 -- # set +x 00:35:06.370 bdev_null0 00:35:06.370 18:12:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:06.370 18:12:09 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:06.370 18:12:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:06.370 18:12:09 -- common/autotest_common.sh@10 -- # set +x 00:35:06.370 18:12:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:06.370 18:12:09 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:06.370 18:12:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:06.370 18:12:09 -- common/autotest_common.sh@10 -- # set +x 00:35:06.370 18:12:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:06.370 18:12:09 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:06.370 18:12:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:06.370 18:12:09 -- common/autotest_common.sh@10 -- # set +x 00:35:06.370 [2024-07-22 18:12:09.937627] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:06.370 18:12:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:06.370 18:12:09 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:06.370 18:12:09 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:06.370 18:12:09 -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:06.370 18:12:09 -- nvmf/common.sh@520 -- # config=() 00:35:06.370 18:12:09 -- nvmf/common.sh@520 -- # local subsystem config 00:35:06.370 18:12:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:35:06.370 18:12:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:35:06.370 { 00:35:06.370 "params": { 00:35:06.370 "name": "Nvme$subsystem", 00:35:06.370 "trtype": "$TEST_TRANSPORT", 00:35:06.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:06.370 "adrfam": "ipv4", 00:35:06.370 "trsvcid": "$NVMF_PORT", 00:35:06.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:06.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:06.370 "hdgst": ${hdgst:-false}, 00:35:06.370 "ddgst": ${ddgst:-false} 00:35:06.370 }, 00:35:06.370 "method": "bdev_nvme_attach_controller" 00:35:06.370 } 00:35:06.370 EOF 00:35:06.370 )") 00:35:06.370 18:12:09 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:06.370 18:12:09 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:06.370 18:12:09 -- target/dif.sh@82 -- # gen_fio_conf 00:35:06.370 18:12:09 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:35:06.370 18:12:09 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:06.370 18:12:09 -- target/dif.sh@54 -- # local file 00:35:06.370 18:12:09 -- nvmf/common.sh@542 -- # cat 00:35:06.370 18:12:09 -- common/autotest_common.sh@1318 -- # local sanitizers 00:35:06.370 18:12:09 -- target/dif.sh@56 -- # cat 00:35:06.370 18:12:09 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:06.370 18:12:09 -- common/autotest_common.sh@1320 -- # shift 00:35:06.370 18:12:09 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:35:06.370 18:12:09 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:35:06.370 18:12:09 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:06.370 18:12:09 -- nvmf/common.sh@544 -- # jq . 
00:35:06.370 18:12:09 -- target/dif.sh@72 -- # (( file = 1 )) 00:35:06.370 18:12:09 -- common/autotest_common.sh@1324 -- # grep libasan 00:35:06.370 18:12:09 -- target/dif.sh@72 -- # (( file <= files )) 00:35:06.370 18:12:09 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:35:06.370 18:12:09 -- nvmf/common.sh@545 -- # IFS=, 00:35:06.370 18:12:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:35:06.370 "params": { 00:35:06.370 "name": "Nvme0", 00:35:06.370 "trtype": "tcp", 00:35:06.370 "traddr": "10.0.0.2", 00:35:06.370 "adrfam": "ipv4", 00:35:06.370 "trsvcid": "4420", 00:35:06.370 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:06.370 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:06.370 "hdgst": true, 00:35:06.370 "ddgst": true 00:35:06.370 }, 00:35:06.370 "method": "bdev_nvme_attach_controller" 00:35:06.370 }' 00:35:06.370 18:12:09 -- common/autotest_common.sh@1324 -- # asan_lib= 00:35:06.370 18:12:09 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:35:06.370 18:12:09 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:35:06.370 18:12:09 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:06.370 18:12:09 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:35:06.371 18:12:09 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:35:06.371 18:12:10 -- common/autotest_common.sh@1324 -- # asan_lib= 00:35:06.371 18:12:10 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:35:06.371 18:12:10 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:06.371 18:12:10 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:06.371 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:06.371 ... 00:35:06.371 fio-3.35 00:35:06.371 Starting 3 threads 00:35:06.371 EAL: No free 2048 kB hugepages reported on node 1 00:35:06.371 [2024-07-22 18:12:10.608796] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:35:06.371 [2024-07-22 18:12:10.608854] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:35:18.608 00:35:18.608 filename0: (groupid=0, jobs=1): err= 0: pid=1916880: Mon Jul 22 18:12:20 2024 00:35:18.608 read: IOPS=297, BW=37.2MiB/s (39.1MB/s)(374MiB/10046msec) 00:35:18.608 slat (nsec): min=3758, max=44825, avg=8302.69, stdev=1011.99 00:35:18.608 clat (usec): min=5741, max=50575, avg=10044.69, stdev=1410.56 00:35:18.608 lat (usec): min=5750, max=50583, avg=10052.99, stdev=1410.57 00:35:18.608 clat percentiles (usec): 00:35:18.608 | 1.00th=[ 6849], 5.00th=[ 7832], 10.00th=[ 8848], 20.00th=[ 9372], 00:35:18.608 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:35:18.608 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11469], 00:35:18.608 | 99.00th=[11863], 99.50th=[12125], 99.90th=[13304], 99.95th=[45876], 00:35:18.608 | 99.99th=[50594] 00:35:18.608 bw ( KiB/s): min=36096, max=41984, per=41.28%, avg=38284.80, stdev=1244.41, samples=20 00:35:18.608 iops : min= 282, max= 328, avg=299.10, stdev= 9.72, samples=20 00:35:18.608 lat (msec) : 10=43.84%, 20=56.10%, 50=0.03%, 100=0.03% 00:35:18.608 cpu : usr=96.38%, sys=3.38%, ctx=21, majf=0, minf=161 00:35:18.608 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:18.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.608 issued rwts: total=2993,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.608 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:18.608 filename0: (groupid=0, jobs=1): err= 0: pid=1916881: Mon Jul 22 18:12:20 2024 00:35:18.608 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(269MiB/10047msec) 00:35:18.608 slat (nsec): min=3143, max=18670, avg=6739.05, stdev=703.94 00:35:18.608 clat (usec): min=7080, max=56489, avg=13970.38, stdev=3831.11 00:35:18.608 lat (usec): min=7086, max=56496, avg=13977.12, stdev=3831.10 00:35:18.608 clat percentiles (usec): 00:35:18.608 | 1.00th=[ 9634], 5.00th=[11600], 10.00th=[12125], 20.00th=[12780], 00:35:18.608 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13698], 60.00th=[13960], 00:35:18.608 | 70.00th=[14353], 80.00th=[14615], 90.00th=[15139], 95.00th=[15664], 00:35:18.608 | 99.00th=[17171], 99.50th=[54264], 99.90th=[55837], 99.95th=[55837], 00:35:18.608 | 99.99th=[56361] 00:35:18.608 bw ( KiB/s): min=22272, max=30208, per=29.68%, avg=27532.80, stdev=1662.13, samples=20 00:35:18.608 iops : min= 174, max= 236, avg=215.10, stdev=12.99, samples=20 00:35:18.608 lat (msec) : 10=1.49%, 20=97.72%, 50=0.05%, 100=0.74% 00:35:18.608 cpu : usr=95.70%, sys=4.09%, ctx=21, majf=0, minf=147 00:35:18.608 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:18.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.608 issued rwts: total=2153,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.608 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:18.608 filename0: (groupid=0, jobs=1): err= 0: pid=1916882: Mon Jul 22 18:12:20 2024 00:35:18.608 read: IOPS=212, BW=26.6MiB/s (27.8MB/s)(267MiB/10044msec) 00:35:18.608 slat (nsec): min=3919, max=38958, avg=8950.97, stdev=1655.03 00:35:18.608 clat (usec): min=8701, max=56048, avg=14088.41, stdev=4024.04 00:35:18.608 lat (usec): min=8710, max=56056, avg=14097.36, stdev=4024.04 00:35:18.608 clat percentiles 
(usec): 00:35:18.608 | 1.00th=[10552], 5.00th=[11863], 10.00th=[12387], 20.00th=[12780], 00:35:18.608 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13698], 60.00th=[13960], 00:35:18.608 | 70.00th=[14353], 80.00th=[14746], 90.00th=[15139], 95.00th=[15664], 00:35:18.608 | 99.00th=[17957], 99.50th=[53740], 99.90th=[55313], 99.95th=[55313], 00:35:18.608 | 99.99th=[55837] 00:35:18.608 bw ( KiB/s): min=24576, max=29184, per=29.42%, avg=27289.60, stdev=1378.35, samples=20 00:35:18.608 iops : min= 192, max= 228, avg=213.20, stdev=10.77, samples=20 00:35:18.608 lat (msec) : 10=0.28%, 20=98.78%, 50=0.05%, 100=0.89% 00:35:18.608 cpu : usr=90.95%, sys=6.20%, ctx=525, majf=0, minf=107 00:35:18.608 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:18.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.608 issued rwts: total=2134,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.608 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:18.608 00:35:18.608 Run status group 0 (all jobs): 00:35:18.608 READ: bw=90.6MiB/s (95.0MB/s), 26.6MiB/s-37.2MiB/s (27.8MB/s-39.1MB/s), io=910MiB (954MB), run=10044-10047msec 00:35:18.608 18:12:20 -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:18.608 18:12:20 -- target/dif.sh@43 -- # local sub 00:35:18.608 18:12:20 -- target/dif.sh@45 -- # for sub in "$@" 00:35:18.608 18:12:20 -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:18.608 18:12:20 -- target/dif.sh@36 -- # local sub_id=0 00:35:18.608 18:12:20 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:18.608 18:12:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:18.608 18:12:20 -- common/autotest_common.sh@10 -- # set +x 00:35:18.608 18:12:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:18.608 18:12:20 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:18.609 18:12:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:18.609 18:12:20 -- common/autotest_common.sh@10 -- # set +x 00:35:18.609 18:12:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:18.609 00:35:18.609 real 0m11.069s 00:35:18.609 user 0m40.278s 00:35:18.609 sys 0m1.657s 00:35:18.609 18:12:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:18.609 18:12:20 -- common/autotest_common.sh@10 -- # set +x 00:35:18.609 ************************************ 00:35:18.609 END TEST fio_dif_digest 00:35:18.609 ************************************ 00:35:18.609 18:12:21 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:18.609 18:12:21 -- target/dif.sh@147 -- # nvmftestfini 00:35:18.609 18:12:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:35:18.609 18:12:21 -- nvmf/common.sh@116 -- # sync 00:35:18.609 18:12:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:35:18.609 18:12:21 -- nvmf/common.sh@119 -- # set +e 00:35:18.609 18:12:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:35:18.609 18:12:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:35:18.609 rmmod nvme_tcp 00:35:18.609 rmmod nvme_fabrics 00:35:18.609 rmmod nvme_keyring 00:35:18.609 18:12:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:35:18.609 18:12:21 -- nvmf/common.sh@123 -- # set -e 00:35:18.609 18:12:21 -- nvmf/common.sh@124 -- # return 0 00:35:18.609 18:12:21 -- nvmf/common.sh@477 -- # '[' -n 1907465 ']' 00:35:18.609 18:12:21 -- nvmf/common.sh@478 -- # killprocess 1907465 00:35:18.609 18:12:21 -- 
common/autotest_common.sh@926 -- # '[' -z 1907465 ']' 00:35:18.609 18:12:21 -- common/autotest_common.sh@930 -- # kill -0 1907465 00:35:18.609 18:12:21 -- common/autotest_common.sh@931 -- # uname 00:35:18.609 18:12:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:18.609 18:12:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1907465 00:35:18.609 18:12:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:18.609 18:12:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:18.609 18:12:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1907465' 00:35:18.609 killing process with pid 1907465 00:35:18.609 18:12:21 -- common/autotest_common.sh@945 -- # kill 1907465 00:35:18.609 18:12:21 -- common/autotest_common.sh@950 -- # wait 1907465 00:35:18.609 18:12:21 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:35:18.609 18:12:21 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:20.650 Waiting for block devices as requested 00:35:20.650 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:20.910 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:20.910 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:20.910 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:21.171 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:21.171 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:21.171 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:21.432 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:21.432 0000:65:00.0 (8086 0a54): vfio-pci -> nvme 00:35:21.432 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:21.692 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:21.692 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:21.692 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:21.692 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:21.953 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:21.953 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:21.953 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:21.953 18:12:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:35:21.953 18:12:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:35:21.953 18:12:26 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:21.953 18:12:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:35:21.953 18:12:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:21.953 18:12:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:21.953 18:12:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:24.499 18:12:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:35:24.499 00:35:24.499 real 1m18.416s 00:35:24.499 user 7m34.284s 00:35:24.499 sys 0m20.316s 00:35:24.499 18:12:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:24.499 18:12:28 -- common/autotest_common.sh@10 -- # set +x 00:35:24.499 ************************************ 00:35:24.499 END TEST nvmf_dif 00:35:24.499 ************************************ 00:35:24.499 18:12:28 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:24.499 18:12:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:24.499 18:12:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:24.499 18:12:28 -- common/autotest_common.sh@10 -- # set +x 00:35:24.499 ************************************ 00:35:24.499 START TEST nvmf_abort_qd_sizes 
00:35:24.499 ************************************ 00:35:24.499 18:12:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:24.499 * Looking for test storage... 00:35:24.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:24.499 18:12:28 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:24.499 18:12:28 -- nvmf/common.sh@7 -- # uname -s 00:35:24.499 18:12:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:24.499 18:12:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:24.499 18:12:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:24.499 18:12:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:24.499 18:12:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:24.499 18:12:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:24.499 18:12:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:24.499 18:12:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:24.499 18:12:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:24.499 18:12:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:24.499 18:12:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:35:24.499 18:12:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=80f8a7aa-1216-ec11-9bc7-a4bf018b228a 00:35:24.499 18:12:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:24.499 18:12:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:24.499 18:12:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:24.499 18:12:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:24.499 18:12:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:24.500 18:12:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:24.500 18:12:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:24.500 18:12:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.500 18:12:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.500 18:12:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.500 18:12:28 -- paths/export.sh@5 -- # export PATH 00:35:24.500 18:12:28 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:24.500 18:12:28 -- nvmf/common.sh@46 -- # : 0 00:35:24.500 18:12:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:35:24.500 18:12:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:35:24.500 18:12:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:35:24.500 18:12:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:24.500 18:12:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:24.500 18:12:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:35:24.500 18:12:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:35:24.500 18:12:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:35:24.500 18:12:28 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:35:24.500 18:12:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:35:24.500 18:12:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:24.500 18:12:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:35:24.500 18:12:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:35:24.500 18:12:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:35:24.500 18:12:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:24.500 18:12:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:24.500 18:12:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:24.500 18:12:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:35:24.500 18:12:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:35:24.500 18:12:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:35:24.500 18:12:28 -- common/autotest_common.sh@10 -- # set +x 00:35:32.643 18:12:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:35:32.643 18:12:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:35:32.643 18:12:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:35:32.643 18:12:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:35:32.643 18:12:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:35:32.643 18:12:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:35:32.643 18:12:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:35:32.643 18:12:36 -- nvmf/common.sh@294 -- # net_devs=() 00:35:32.643 18:12:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:35:32.643 18:12:36 -- nvmf/common.sh@295 -- # e810=() 00:35:32.643 18:12:36 -- nvmf/common.sh@295 -- # local -ga e810 00:35:32.643 18:12:36 -- nvmf/common.sh@296 -- # x722=() 00:35:32.643 18:12:36 -- nvmf/common.sh@296 -- # local -ga x722 00:35:32.643 18:12:36 -- nvmf/common.sh@297 -- # mlx=() 00:35:32.643 18:12:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:35:32.643 18:12:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:32.643 18:12:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:32.643 18:12:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:32.643 18:12:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:32.643 18:12:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:32.643 18:12:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:32.643 18:12:36 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:32.643 18:12:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:32.643 18:12:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:32.643 18:12:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:32.643 18:12:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:32.643 18:12:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:35:32.643 18:12:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:35:32.643 18:12:36 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:35:32.643 18:12:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:35:32.643 18:12:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:35:32.643 18:12:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:35:32.643 18:12:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:35:32.643 18:12:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:32.643 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:32.643 18:12:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:35:32.643 18:12:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:35:32.643 18:12:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:32.643 18:12:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:32.643 18:12:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:35:32.643 18:12:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:35:32.643 18:12:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:32.643 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:32.643 18:12:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:35:32.643 18:12:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:35:32.644 18:12:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:32.644 18:12:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:32.644 18:12:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:35:32.644 18:12:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:35:32.644 18:12:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:35:32.644 18:12:36 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:35:32.644 18:12:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:35:32.644 18:12:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:32.644 18:12:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:35:32.644 18:12:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:32.644 18:12:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:32.644 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:32.644 18:12:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:35:32.644 18:12:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:35:32.644 18:12:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:32.644 18:12:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:35:32.644 18:12:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:32.644 18:12:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:32.644 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:32.644 18:12:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:35:32.644 18:12:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:35:32.644 18:12:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:35:32.644 18:12:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:35:32.644 18:12:36 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:35:32.644 18:12:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:35:32.644 18:12:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:32.644 18:12:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:32.644 18:12:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:32.644 18:12:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:35:32.644 18:12:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:32.644 18:12:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:32.644 18:12:36 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:35:32.644 18:12:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:32.644 18:12:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:32.644 18:12:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:35:32.644 18:12:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:35:32.644 18:12:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:35:32.644 18:12:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:32.644 18:12:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:32.644 18:12:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:32.644 18:12:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:35:32.644 18:12:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:32.644 18:12:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:32.644 18:12:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:32.644 18:12:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:35:32.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:32.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:35:32.644 00:35:32.644 --- 10.0.0.2 ping statistics --- 00:35:32.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:32.644 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:35:32.644 18:12:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:32.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:32.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:35:32.644 00:35:32.644 --- 10.0.0.1 ping statistics --- 00:35:32.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:32.644 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:35:32.644 18:12:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:32.644 18:12:36 -- nvmf/common.sh@410 -- # return 0 00:35:32.644 18:12:36 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:35:32.644 18:12:36 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:36.845 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:35:36.845 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:35:36.845 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:35:36.845 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:35:36.845 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:35:36.845 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:35:36.845 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:35:36.845 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:35:36.845 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:35:36.845 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:35:36.845 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:35:36.845 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:35:36.845 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:35:36.845 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:35:36.845 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:35:36.845 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:35:38.229 0000:65:00.0 (8086 0a54): nvme -> vfio-pci 00:35:38.229 18:12:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:38.229 18:12:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:35:38.229 18:12:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:35:38.229 18:12:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:38.229 18:12:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:35:38.229 18:12:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:35:38.229 18:12:42 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:35:38.229 18:12:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:35:38.229 18:12:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:35:38.229 18:12:42 -- common/autotest_common.sh@10 -- # set +x 00:35:38.229 18:12:42 -- nvmf/common.sh@469 -- # nvmfpid=1926705 00:35:38.229 18:12:42 -- nvmf/common.sh@470 -- # waitforlisten 1926705 00:35:38.229 18:12:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:38.229 18:12:42 -- common/autotest_common.sh@819 -- # '[' -z 1926705 ']' 00:35:38.229 18:12:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:38.229 18:12:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:35:38.229 18:12:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:38.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:38.229 18:12:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:35:38.229 18:12:42 -- common/autotest_common.sh@10 -- # set +x 00:35:38.229 [2024-07-22 18:12:42.479237] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:35:38.229 [2024-07-22 18:12:42.479296] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:38.488 EAL: No free 2048 kB hugepages reported on node 1 00:35:38.488 [2024-07-22 18:12:42.573878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:38.488 [2024-07-22 18:12:42.667054] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:35:38.488 [2024-07-22 18:12:42.667209] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:38.488 [2024-07-22 18:12:42.667217] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:38.488 [2024-07-22 18:12:42.667225] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:38.488 [2024-07-22 18:12:42.667360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:38.488 [2024-07-22 18:12:42.667406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:38.488 [2024-07-22 18:12:42.667584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:35:38.488 [2024-07-22 18:12:42.667588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:39.422 18:12:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:35:39.422 18:12:43 -- common/autotest_common.sh@852 -- # return 0 00:35:39.423 18:12:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:35:39.423 18:12:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:35:39.423 18:12:43 -- common/autotest_common.sh@10 -- # set +x 00:35:39.423 18:12:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:39.423 18:12:43 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:39.423 18:12:43 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:35:39.423 18:12:43 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:35:39.423 18:12:43 -- scripts/common.sh@311 -- # local bdf bdfs 00:35:39.423 18:12:43 -- scripts/common.sh@312 -- # local nvmes 00:35:39.423 18:12:43 -- scripts/common.sh@314 -- # [[ -n 0000:65:00.0 ]] 00:35:39.423 18:12:43 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:39.423 18:12:43 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:35:39.423 18:12:43 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:35:39.423 18:12:43 -- scripts/common.sh@322 -- # uname -s 00:35:39.423 18:12:43 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:35:39.423 18:12:43 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:35:39.423 18:12:43 -- scripts/common.sh@327 -- # (( 1 )) 00:35:39.423 18:12:43 -- scripts/common.sh@328 -- # printf '%s\n' 0000:65:00.0 00:35:39.423 18:12:43 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:35:39.423 18:12:43 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:65:00.0 00:35:39.423 18:12:43 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:35:39.423 18:12:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:39.423 18:12:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:39.423 18:12:43 -- common/autotest_common.sh@10 -- # set +x 00:35:39.423 ************************************ 00:35:39.423 START TEST 
spdk_target_abort 00:35:39.423 ************************************ 00:35:39.423 18:12:43 -- common/autotest_common.sh@1104 -- # spdk_target 00:35:39.423 18:12:43 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:39.423 18:12:43 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:35:39.423 18:12:43 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:35:39.423 18:12:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:39.423 18:12:43 -- common/autotest_common.sh@10 -- # set +x 00:35:41.954 spdk_targetn1 00:35:41.954 18:12:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:41.954 18:12:46 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:41.954 18:12:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:41.954 18:12:46 -- common/autotest_common.sh@10 -- # set +x 00:35:42.212 [2024-07-22 18:12:46.233331] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:42.212 18:12:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:42.212 18:12:46 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:35:42.212 18:12:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:42.212 18:12:46 -- common/autotest_common.sh@10 -- # set +x 00:35:42.212 18:12:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:42.212 18:12:46 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:35:42.212 18:12:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:42.212 18:12:46 -- common/autotest_common.sh@10 -- # set +x 00:35:42.212 18:12:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:42.212 18:12:46 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:35:42.212 18:12:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:42.212 18:12:46 -- common/autotest_common.sh@10 -- # set +x 00:35:42.212 [2024-07-22 18:12:46.271313] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:42.212 18:12:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:42.212 18:12:46 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:35:42.212 18:12:46 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:42.212 18:12:46 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:42.212 18:12:46 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:42.212 18:12:46 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:42.212 18:12:46 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:35:42.212 18:12:46 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:42.212 18:12:46 -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:42.212 18:12:46 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:42.212 18:12:46 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:42.212 18:12:46 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:42.212 18:12:46 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:42.212 18:12:46 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:42.212 18:12:46 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid 
subnqn 00:35:42.212 18:12:46 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:42.212 18:12:46 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:42.212 18:12:46 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:42.212 18:12:46 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:42.212 18:12:46 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:42.212 18:12:46 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:42.212 18:12:46 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:42.212 EAL: No free 2048 kB hugepages reported on node 1 00:35:45.494 Initializing NVMe Controllers 00:35:45.494 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:35:45.494 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:35:45.494 Initialization complete. Launching workers. 00:35:45.494 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 12974, failed: 0 00:35:45.494 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2605, failed to submit 10369 00:35:45.494 success 720, unsuccess 1885, failed 0 00:35:45.494 18:12:49 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:45.494 18:12:49 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:45.494 EAL: No free 2048 kB hugepages reported on node 1 00:35:48.815 Initializing NVMe Controllers 00:35:48.815 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:35:48.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:35:48.815 Initialization complete. Launching workers. 00:35:48.815 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8608, failed: 0 00:35:48.815 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1224, failed to submit 7384 00:35:48.815 success 327, unsuccess 897, failed 0 00:35:48.815 18:12:52 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:48.815 18:12:52 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:35:48.815 EAL: No free 2048 kB hugepages reported on node 1 00:35:52.093 Initializing NVMe Controllers 00:35:52.093 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:35:52.093 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:35:52.093 Initialization complete. Launching workers. 
00:35:52.093 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 43358, failed: 0 00:35:52.093 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2590, failed to submit 40768 00:35:52.093 success 595, unsuccess 1995, failed 0 00:35:52.094 18:12:56 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:35:52.094 18:12:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:52.094 18:12:56 -- common/autotest_common.sh@10 -- # set +x 00:35:52.094 18:12:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:52.094 18:12:56 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:52.094 18:12:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:35:52.094 18:12:56 -- common/autotest_common.sh@10 -- # set +x 00:35:54.623 18:12:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:35:54.623 18:12:58 -- target/abort_qd_sizes.sh@62 -- # killprocess 1926705 00:35:54.623 18:12:58 -- common/autotest_common.sh@926 -- # '[' -z 1926705 ']' 00:35:54.623 18:12:58 -- common/autotest_common.sh@930 -- # kill -0 1926705 00:35:54.623 18:12:58 -- common/autotest_common.sh@931 -- # uname 00:35:54.623 18:12:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:35:54.623 18:12:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1926705 00:35:54.623 18:12:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:35:54.623 18:12:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:35:54.623 18:12:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1926705' 00:35:54.623 killing process with pid 1926705 00:35:54.623 18:12:58 -- common/autotest_common.sh@945 -- # kill 1926705 00:35:54.623 18:12:58 -- common/autotest_common.sh@950 -- # wait 1926705 00:35:54.623 00:35:54.623 real 0m15.095s 00:35:54.623 user 1m0.842s 00:35:54.623 sys 0m1.712s 00:35:54.623 18:12:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:54.623 18:12:58 -- common/autotest_common.sh@10 -- # set +x 00:35:54.623 ************************************ 00:35:54.623 END TEST spdk_target_abort 00:35:54.623 ************************************ 00:35:54.623 18:12:58 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:35:54.623 18:12:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:35:54.623 18:12:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:35:54.623 18:12:58 -- common/autotest_common.sh@10 -- # set +x 00:35:54.623 ************************************ 00:35:54.623 START TEST kernel_target_abort 00:35:54.623 ************************************ 00:35:54.623 18:12:58 -- common/autotest_common.sh@1104 -- # kernel_target 00:35:54.623 18:12:58 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:35:54.623 18:12:58 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:35:54.623 18:12:58 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:35:54.623 18:12:58 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:35:54.623 18:12:58 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:35:54.623 18:12:58 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:35:54.623 18:12:58 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:54.623 18:12:58 -- nvmf/common.sh@627 -- # local block nvme 00:35:54.623 18:12:58 
-- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:35:54.623 18:12:58 -- nvmf/common.sh@630 -- # modprobe nvmet 00:35:54.623 18:12:58 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:54.623 18:12:58 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:58.828 Waiting for block devices as requested 00:35:58.828 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:58.828 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:58.828 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:58.828 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:58.828 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:58.828 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:58.828 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:58.828 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:58.828 0000:65:00.0 (8086 0a54): vfio-pci -> nvme 00:35:59.166 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:35:59.166 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:35:59.166 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:35:59.166 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:35:59.457 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:35:59.457 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:35:59.457 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:35:59.457 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:35:59.457 18:13:03 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:35:59.457 18:13:03 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:59.457 18:13:03 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:35:59.457 18:13:03 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:35:59.457 18:13:03 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:59.717 No valid GPT data, bailing 00:35:59.717 18:13:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:59.717 18:13:03 -- scripts/common.sh@393 -- # pt= 00:35:59.717 18:13:03 -- scripts/common.sh@394 -- # return 1 00:35:59.717 18:13:03 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:35:59.717 18:13:03 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:35:59.717 18:13:03 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:35:59.717 18:13:03 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:35:59.717 18:13:03 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:59.717 18:13:03 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:35:59.717 18:13:03 -- nvmf/common.sh@654 -- # echo 1 00:35:59.717 18:13:03 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:35:59.717 18:13:03 -- nvmf/common.sh@656 -- # echo 1 00:35:59.717 18:13:03 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:35:59.717 18:13:03 -- nvmf/common.sh@663 -- # echo tcp 00:35:59.717 18:13:03 -- nvmf/common.sh@664 -- # echo 4420 00:35:59.717 18:13:03 -- nvmf/common.sh@665 -- # echo ipv4 00:35:59.717 18:13:03 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:59.717 18:13:03 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80f8a7aa-1216-ec11-9bc7-a4bf018b228a --hostid=80f8a7aa-1216-ec11-9bc7-a4bf018b228a -a 10.0.0.1 -t tcp -s 4420 00:35:59.717 00:35:59.717 Discovery Log Number of Records 2, Generation counter 2 00:35:59.717 =====Discovery Log Entry 0====== 00:35:59.717 trtype: tcp 00:35:59.717 adrfam: ipv4 00:35:59.717 
subtype: current discovery subsystem 00:35:59.717 treq: not specified, sq flow control disable supported 00:35:59.717 portid: 1 00:35:59.717 trsvcid: 4420 00:35:59.717 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:59.717 traddr: 10.0.0.1 00:35:59.717 eflags: none 00:35:59.717 sectype: none 00:35:59.717 =====Discovery Log Entry 1====== 00:35:59.717 trtype: tcp 00:35:59.717 adrfam: ipv4 00:35:59.717 subtype: nvme subsystem 00:35:59.717 treq: not specified, sq flow control disable supported 00:35:59.717 portid: 1 00:35:59.717 trsvcid: 4420 00:35:59.717 subnqn: kernel_target 00:35:59.717 traddr: 10.0.0.1 00:35:59.717 eflags: none 00:35:59.717 sectype: none 00:35:59.717 18:13:03 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:35:59.717 18:13:03 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:59.717 18:13:03 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:59.717 18:13:03 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:59.717 18:13:03 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:59.717 18:13:03 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:35:59.717 18:13:03 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:59.717 18:13:03 -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:59.717 18:13:03 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:59.717 18:13:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:59.717 18:13:03 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:59.717 18:13:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:59.717 18:13:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:59.717 18:13:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:59.717 18:13:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:59.717 18:13:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:59.717 18:13:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:59.717 18:13:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:59.717 18:13:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:59.717 18:13:03 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:59.717 18:13:03 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:35:59.717 EAL: No free 2048 kB hugepages reported on node 1 00:36:03.005 Initializing NVMe Controllers 00:36:03.005 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:36:03.005 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:36:03.005 Initialization complete. Launching workers. 
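The configure_kernel_target steps traced above amount to a short configfs sequence: load the nvmet modules, create a subsystem whose single namespace is backed by the local /dev/nvme0n1, expose a TCP port on 10.0.0.1:4420, and link the subsystem to the port so it appears in the discovery log shown here. A minimal sketch of that sequence follows; the directory names and echoed values mirror the log, while the configfs attribute file names (attr_serial, attr_allow_any_host, device_path, enable, addr_*) are assumed from the standard kernel nvmet layout, since the xtrace only records the values being echoed.

  # Sketch of the kernel NVMe-oF target setup reflected in the trace above.
  # Attribute file names are assumed from the standard nvmet configfs layout;
  # directory names and values match the log.
  modprobe nvmet nvmet_tcp   # trace shows "modprobe nvmet"; nvmet_tcp is unloaded again at teardown
  cfg=/sys/kernel/config/nvmet
  mkdir "$cfg/subsystems/kernel_target"
  mkdir "$cfg/subsystems/kernel_target/namespaces/1"
  mkdir "$cfg/ports/1"
  echo SPDK-kernel_target > "$cfg/subsystems/kernel_target/attr_serial"
  echo 1                  > "$cfg/subsystems/kernel_target/attr_allow_any_host"
  echo /dev/nvme0n1       > "$cfg/subsystems/kernel_target/namespaces/1/device_path"
  echo 1                  > "$cfg/subsystems/kernel_target/namespaces/1/enable"
  echo 10.0.0.1           > "$cfg/ports/1/addr_traddr"
  echo tcp                > "$cfg/ports/1/addr_trtype"
  echo 4420               > "$cfg/ports/1/addr_trsvcid"
  echo ipv4               > "$cfg/ports/1/addr_adrfam"
  ln -s "$cfg/subsystems/kernel_target" "$cfg/ports/1/subsystems/"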
00:36:03.005 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 72139, failed: 0 00:36:03.005 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 72139, failed to submit 0 00:36:03.005 success 0, unsuccess 72139, failed 0 00:36:03.005 18:13:06 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:03.005 18:13:06 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:36:03.005 EAL: No free 2048 kB hugepages reported on node 1 00:36:06.302 Initializing NVMe Controllers 00:36:06.302 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:36:06.302 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:36:06.302 Initialization complete. Launching workers. 00:36:06.302 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 115577, failed: 0 00:36:06.302 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 28990, failed to submit 86587 00:36:06.302 success 0, unsuccess 28990, failed 0 00:36:06.302 18:13:10 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:06.302 18:13:10 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:36:06.302 EAL: No free 2048 kB hugepages reported on node 1 00:36:08.837 Initializing NVMe Controllers 00:36:08.837 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:36:08.837 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:36:08.837 Initialization complete. Launching workers. 
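The three runs above (queue depths 4, 24, and 64) come from the same loop visible in the xtrace: the -r transport string is assembled field by field from trtype/adrfam/traddr/trsvcid/subnqn, and the SPDK abort example is launched once per queue depth. A condensed sketch of that loop, with arguments taken from the log (the actual abort_qd_sizes.sh builds the target string incrementally, as the trace shows):

  # Condensed sketch of the rabort loop driving the runs above.
  qds=(4 24 64)
  target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target'
  for qd in "${qds[@]}"; do
      # 4 KiB I/O, 50/50 read-write mix; aborts are issued against outstanding I/O
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
          -q "$qd" -w rw -M 50 -o 4096 -r "$target"
  done

Only the abort counters change between runs: at queue depth 4 an abort is submitted for every completed I/O, while at depths 24 and 64 most abort attempts are reported as failed to submit.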
00:36:08.837 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 112043, failed: 0 00:36:08.837 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 28014, failed to submit 84029 00:36:08.837 success 0, unsuccess 28014, failed 0 00:36:08.837 18:13:13 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:36:08.837 18:13:13 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:36:08.837 18:13:13 -- nvmf/common.sh@677 -- # echo 0 00:36:08.837 18:13:13 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:36:08.837 18:13:13 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:36:09.096 18:13:13 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:09.096 18:13:13 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:36:09.096 18:13:13 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:36:09.096 18:13:13 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:36:09.096 00:36:09.096 real 0m14.623s 00:36:09.096 user 0m8.378s 00:36:09.096 sys 0m3.574s 00:36:09.096 18:13:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:09.096 18:13:13 -- common/autotest_common.sh@10 -- # set +x 00:36:09.096 ************************************ 00:36:09.096 END TEST kernel_target_abort 00:36:09.096 ************************************ 00:36:09.096 18:13:13 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:36:09.096 18:13:13 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:36:09.096 18:13:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:36:09.096 18:13:13 -- nvmf/common.sh@116 -- # sync 00:36:09.096 18:13:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:36:09.096 18:13:13 -- nvmf/common.sh@119 -- # set +e 00:36:09.096 18:13:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:36:09.096 18:13:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:36:09.096 rmmod nvme_tcp 00:36:09.096 rmmod nvme_fabrics 00:36:09.096 rmmod nvme_keyring 00:36:09.096 18:13:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:36:09.096 18:13:13 -- nvmf/common.sh@123 -- # set -e 00:36:09.096 18:13:13 -- nvmf/common.sh@124 -- # return 0 00:36:09.096 18:13:13 -- nvmf/common.sh@477 -- # '[' -n 1926705 ']' 00:36:09.096 18:13:13 -- nvmf/common.sh@478 -- # killprocess 1926705 00:36:09.096 18:13:13 -- common/autotest_common.sh@926 -- # '[' -z 1926705 ']' 00:36:09.096 18:13:13 -- common/autotest_common.sh@930 -- # kill -0 1926705 00:36:09.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1926705) - No such process 00:36:09.096 18:13:13 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1926705 is not found' 00:36:09.096 Process with pid 1926705 is not found 00:36:09.096 18:13:13 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:36:09.096 18:13:13 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:13.294 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:36:13.294 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:36:13.294 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:36:13.294 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:36:13.294 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:36:13.294 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:36:13.294 0000:80:01.0 (8086 0b00): Already using the ioatdma 
driver 00:36:13.294 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:36:13.294 0000:65:00.0 (8086 0a54): Already using the nvme driver 00:36:13.294 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:36:13.294 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:36:13.294 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:36:13.294 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:36:13.294 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:36:13.294 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:36:13.294 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:36:13.294 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:36:13.294 18:13:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:36:13.294 18:13:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:36:13.294 18:13:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:13.294 18:13:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:36:13.294 18:13:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:13.294 18:13:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:13.294 18:13:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:15.198 18:13:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:36:15.198 00:36:15.198 real 0m50.974s 00:36:15.198 user 1m14.954s 00:36:15.198 sys 0m16.478s 00:36:15.198 18:13:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:15.198 18:13:19 -- common/autotest_common.sh@10 -- # set +x 00:36:15.198 ************************************ 00:36:15.198 END TEST nvmf_abort_qd_sizes 00:36:15.198 ************************************ 00:36:15.198 18:13:19 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:15.198 18:13:19 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:15.198 18:13:19 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:36:15.198 18:13:19 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:36:15.198 18:13:19 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:36:15.198 18:13:19 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:15.198 18:13:19 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:15.198 18:13:19 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:15.198 18:13:19 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:15.198 18:13:19 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:15.198 18:13:19 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:15.198 18:13:19 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:15.198 18:13:19 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:15.198 18:13:19 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:15.198 18:13:19 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:36:15.198 18:13:19 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:36:15.198 18:13:19 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:36:15.198 18:13:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:36:15.198 18:13:19 -- common/autotest_common.sh@10 -- # set +x 00:36:15.198 18:13:19 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:36:15.198 18:13:19 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:36:15.198 18:13:19 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:36:15.198 18:13:19 -- common/autotest_common.sh@10 -- # set +x 00:36:21.774 INFO: APP EXITING 00:36:21.774 INFO: killing all VMs 00:36:21.774 INFO: killing vhost app 00:36:21.774 INFO: EXIT DONE 00:36:25.973 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:36:25.973 0000:80:01.7 (8086 
0b00): Already using the ioatdma driver 00:36:25.973 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:36:25.973 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:36:25.973 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:36:25.973 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:36:25.973 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:36:25.973 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:36:25.973 0000:65:00.0 (8086 0a54): Already using the nvme driver 00:36:25.973 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:36:25.973 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:36:25.973 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:36:25.973 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:36:25.973 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:36:25.973 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:36:25.973 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:36:25.973 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:36:30.171 Cleaning 00:36:30.171 Removing: /var/run/dpdk/spdk0/config 00:36:30.171 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:30.171 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:30.171 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:30.171 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:30.171 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:30.171 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:30.171 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:30.171 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:30.171 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:30.171 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:30.171 Removing: /var/run/dpdk/spdk1/config 00:36:30.171 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:30.171 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:30.171 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:30.171 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:30.171 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:30.171 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:30.171 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:30.171 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:30.171 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:30.171 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:30.171 Removing: /var/run/dpdk/spdk1/mp_socket 00:36:30.171 Removing: /var/run/dpdk/spdk2/config 00:36:30.171 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:30.171 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:30.171 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:30.171 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:30.171 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:30.171 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:30.171 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:30.171 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:30.171 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:30.171 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:30.171 Removing: /var/run/dpdk/spdk3/config 00:36:30.171 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:30.171 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:30.171 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:30.171 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:30.171 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:30.171 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:30.171 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:30.171 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:30.171 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:30.171 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:30.171 Removing: /var/run/dpdk/spdk4/config 00:36:30.171 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:30.171 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:30.171 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:30.171 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:30.171 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:30.171 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:30.171 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:30.171 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:30.171 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:30.171 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:30.171 Removing: /dev/shm/bdev_svc_trace.1 00:36:30.171 Removing: /dev/shm/nvmf_trace.0 00:36:30.171 Removing: /dev/shm/spdk_tgt_trace.pid1480285 00:36:30.171 Removing: /var/run/dpdk/spdk0 00:36:30.171 Removing: /var/run/dpdk/spdk1 00:36:30.171 Removing: /var/run/dpdk/spdk2 00:36:30.171 Removing: /var/run/dpdk/spdk3 00:36:30.171 Removing: /var/run/dpdk/spdk4 00:36:30.171 Removing: /var/run/dpdk/spdk_pid1476874 00:36:30.171 Removing: /var/run/dpdk/spdk_pid1478142 00:36:30.171 Removing: /var/run/dpdk/spdk_pid1480285 00:36:30.171 Removing: /var/run/dpdk/spdk_pid1480940 00:36:30.171 Removing: /var/run/dpdk/spdk_pid1482619 00:36:30.171 Removing: /var/run/dpdk/spdk_pid1483975 00:36:30.171 Removing: /var/run/dpdk/spdk_pid1484324 00:36:30.171 Removing: /var/run/dpdk/spdk_pid1484682 00:36:30.171 Removing: /var/run/dpdk/spdk_pid1484962 00:36:30.171 Removing: /var/run/dpdk/spdk_pid1485147 00:36:30.171 Removing: /var/run/dpdk/spdk_pid1485451 00:36:30.171 Removing: /var/run/dpdk/spdk_pid1485769 00:36:30.171 Removing: /var/run/dpdk/spdk_pid1485915 00:36:30.171 Removing: /var/run/dpdk/spdk_pid1487116 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1490126 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1490463 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1490794 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1490831 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1491217 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1491458 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1491807 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1491981 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1492158 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1492464 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1492513 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1492809 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1493214 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1493464 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1493627 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1493950 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1493988 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1494194 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1494356 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1494656 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1494958 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1495053 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1495303 00:36:30.429 
Removing: /var/run/dpdk/spdk_pid1495622 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1495795 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1495981 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1496269 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1496586 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1496672 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1496926 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1497233 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1497521 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1497595 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1497893 00:36:30.429 Removing: /var/run/dpdk/spdk_pid1498195 00:36:30.430 Removing: /var/run/dpdk/spdk_pid1498310 00:36:30.430 Removing: /var/run/dpdk/spdk_pid1498540 00:36:30.430 Removing: /var/run/dpdk/spdk_pid1498859 00:36:30.430 Removing: /var/run/dpdk/spdk_pid1499120 00:36:30.430 Removing: /var/run/dpdk/spdk_pid1499220 00:36:30.430 Removing: /var/run/dpdk/spdk_pid1499506 00:36:30.430 Removing: /var/run/dpdk/spdk_pid1499831 00:36:30.430 Removing: /var/run/dpdk/spdk_pid1500009 00:36:30.430 Removing: /var/run/dpdk/spdk_pid1500184 00:36:30.430 Removing: /var/run/dpdk/spdk_pid1500469 00:36:30.430 Removing: /var/run/dpdk/spdk_pid1500795 00:36:30.430 Removing: /var/run/dpdk/spdk_pid1500916 00:36:30.430 Removing: /var/run/dpdk/spdk_pid1501131 00:36:30.430 Removing: /var/run/dpdk/spdk_pid1501441 00:36:30.430 Removing: /var/run/dpdk/spdk_pid1501761 00:36:30.430 Removing: /var/run/dpdk/spdk_pid1501830 00:36:30.430 Removing: /var/run/dpdk/spdk_pid1502109 00:36:30.430 Removing: /var/run/dpdk/spdk_pid1502417 00:36:30.430 Removing: /var/run/dpdk/spdk_pid1502727 00:36:30.430 Removing: /var/run/dpdk/spdk_pid1502783 00:36:30.430 Removing: /var/run/dpdk/spdk_pid1503085 00:36:30.430 Removing: /var/run/dpdk/spdk_pid1503387 00:36:30.430 Removing: /var/run/dpdk/spdk_pid1503656 00:36:30.430 Removing: /var/run/dpdk/spdk_pid1503774 00:36:30.430 Removing: /var/run/dpdk/spdk_pid1504143 00:36:30.430 Removing: /var/run/dpdk/spdk_pid1508790 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1598901 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1604155 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1615292 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1621518 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1626571 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1627195 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1637625 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1637953 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1643148 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1650748 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1653591 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1665849 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1676733 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1678312 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1679301 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1700006 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1704604 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1710089 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1711765 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1713717 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1713809 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1714116 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1714295 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1714825 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1716819 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1717831 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1718316 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1724826 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1731400 00:36:30.688 
Removing: /var/run/dpdk/spdk_pid1736412 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1777988 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1782609 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1789629 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1790994 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1792411 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1797647 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1802797 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1812207 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1812231 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1817940 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1818027 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1818311 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1818629 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1818734 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1820800 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1822597 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1824180 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1826007 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1827691 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1829953 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1836997 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1837731 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1840055 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1841141 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1849198 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1851979 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1858492 00:36:30.688 Removing: /var/run/dpdk/spdk_pid1865234 00:36:30.947 Removing: /var/run/dpdk/spdk_pid1872125 00:36:30.947 Removing: /var/run/dpdk/spdk_pid1872776 00:36:30.947 Removing: /var/run/dpdk/spdk_pid1873393 00:36:30.947 Removing: /var/run/dpdk/spdk_pid1874451 00:36:30.947 Removing: /var/run/dpdk/spdk_pid1875162 00:36:30.947 Removing: /var/run/dpdk/spdk_pid1875887 00:36:30.947 Removing: /var/run/dpdk/spdk_pid1876576 00:36:30.947 Removing: /var/run/dpdk/spdk_pid1877312 00:36:30.947 Removing: /var/run/dpdk/spdk_pid1882495 00:36:30.947 Removing: /var/run/dpdk/spdk_pid1882796 00:36:30.947 Removing: /var/run/dpdk/spdk_pid1889791 00:36:30.947 Removing: /var/run/dpdk/spdk_pid1889878 00:36:30.947 Removing: /var/run/dpdk/spdk_pid1892160 00:36:30.947 Removing: /var/run/dpdk/spdk_pid1901096 00:36:30.947 Removing: /var/run/dpdk/spdk_pid1901179 00:36:30.947 Removing: /var/run/dpdk/spdk_pid1907811 00:36:30.947 Removing: /var/run/dpdk/spdk_pid1909788 00:36:30.947 Removing: /var/run/dpdk/spdk_pid1911843 00:36:30.947 Removing: /var/run/dpdk/spdk_pid1912933 00:36:30.947 Removing: /var/run/dpdk/spdk_pid1915081 00:36:30.947 Removing: /var/run/dpdk/spdk_pid1916657 00:36:30.947 Removing: /var/run/dpdk/spdk_pid1927361 00:36:30.947 Removing: /var/run/dpdk/spdk_pid1927948 00:36:30.947 Removing: /var/run/dpdk/spdk_pid1928565 00:36:30.947 Removing: /var/run/dpdk/spdk_pid1931246 00:36:30.947 Removing: /var/run/dpdk/spdk_pid1931799 00:36:30.947 Removing: /var/run/dpdk/spdk_pid1932342 00:36:30.947 Clean 00:36:30.947 killing process with pid 1420350 00:36:43.180 killing process with pid 1420347 00:36:43.180 killing process with pid 1420349 00:36:43.180 killing process with pid 1420348 00:36:43.180 18:13:45 -- common/autotest_common.sh@1436 -- # return 0 00:36:43.180 18:13:45 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:36:43.180 18:13:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:36:43.180 18:13:45 -- common/autotest_common.sh@10 -- # set +x 00:36:43.180 18:13:45 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:36:43.180 18:13:45 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:36:43.180 18:13:45 -- common/autotest_common.sh@10 -- # set +x 00:36:43.180 18:13:45 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:43.180 18:13:45 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:43.180 18:13:45 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:43.180 18:13:45 -- spdk/autotest.sh@394 -- # hash lcov 00:36:43.180 18:13:45 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:36:43.180 18:13:45 -- spdk/autotest.sh@396 -- # hostname 00:36:43.180 18:13:45 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-CYP-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:43.180 geninfo: WARNING: invalid characters removed from testname! 00:37:05.198 18:14:06 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:05.198 18:14:09 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:07.176 18:14:11 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:09.082 18:14:13 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:10.988 18:14:15 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:12.895 18:14:17 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:14.803 18:14:18 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:14.803 18:14:19 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:14.803 18:14:19 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:37:14.803 18:14:19 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:14.803 18:14:19 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:14.803 18:14:19 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:14.803 18:14:19 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:14.803 18:14:19 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:14.803 18:14:19 -- paths/export.sh@5 -- $ export PATH 00:37:14.803 18:14:19 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:14.803 18:14:19 -- common/autobuild_common.sh@437 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:37:14.803 18:14:19 -- common/autobuild_common.sh@438 -- $ date +%s 00:37:15.063 18:14:19 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721664859.XXXXXX 00:37:15.063 18:14:19 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1721664859.lEKNyS 00:37:15.063 18:14:19 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]] 00:37:15.063 18:14:19 -- common/autobuild_common.sh@444 -- $ '[' -n '' ']' 00:37:15.063 18:14:19 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:37:15.063 18:14:19 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:37:15.063 18:14:19 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:37:15.063 18:14:19 -- common/autobuild_common.sh@454 -- $ get_config_params 00:37:15.063 18:14:19 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:37:15.063 18:14:19 -- common/autotest_common.sh@10 -- $ set +x 00:37:15.063 18:14:19 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:37:15.063 18:14:19 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j128 00:37:15.063 18:14:19 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:15.063 18:14:19 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:37:15.063 18:14:19 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:37:15.063 18:14:19 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:37:15.063 18:14:19 -- spdk/autopackage.sh@19 -- $ timing_finish 00:37:15.063 18:14:19 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:15.063 18:14:19 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:37:15.063 18:14:19 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:15.063 18:14:19 -- spdk/autopackage.sh@20 -- $ exit 0 00:37:15.063 + [[ -n 1377469 ]] 00:37:15.063 + sudo kill 1377469 00:37:15.073 [Pipeline] } 00:37:15.092 [Pipeline] // stage 00:37:15.098 [Pipeline] } 00:37:15.118 [Pipeline] // timeout 00:37:15.124 [Pipeline] } 00:37:15.142 [Pipeline] // catchError 00:37:15.147 [Pipeline] } 00:37:15.166 [Pipeline] // wrap 00:37:15.172 [Pipeline] } 00:37:15.189 [Pipeline] // catchError 00:37:15.199 [Pipeline] stage 00:37:15.201 [Pipeline] { (Epilogue) 00:37:15.215 [Pipeline] catchError 00:37:15.217 [Pipeline] { 00:37:15.232 [Pipeline] echo 00:37:15.234 Cleanup processes 00:37:15.240 [Pipeline] sh 00:37:15.527 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:15.527 1948137 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:15.542 [Pipeline] sh 00:37:15.825 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:15.825 ++ grep -v 'sudo pgrep' 00:37:15.825 ++ awk '{print $1}' 00:37:15.825 + sudo kill -9 00:37:15.825 + true 00:37:15.838 [Pipeline] sh 00:37:16.123 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:31.030 [Pipeline] sh 00:37:31.314 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:31.314 Artifacts sizes are good 00:37:31.328 [Pipeline] archiveArtifacts 00:37:31.335 Archiving artifacts 00:37:31.537 [Pipeline] sh 00:37:31.851 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:31.866 [Pipeline] cleanWs 00:37:31.876 [WS-CLEANUP] Deleting project workspace... 00:37:31.876 [WS-CLEANUP] Deferred wipeout is used... 00:37:31.883 [WS-CLEANUP] done 00:37:31.885 [Pipeline] } 00:37:31.906 [Pipeline] // catchError 00:37:31.918 [Pipeline] sh 00:37:32.203 + logger -p user.info -t JENKINS-CI 00:37:32.214 [Pipeline] } 00:37:32.230 [Pipeline] // stage 00:37:32.237 [Pipeline] } 00:37:32.257 [Pipeline] // node 00:37:32.263 [Pipeline] End of Pipeline 00:37:32.299 Finished: SUCCESS
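For reference, the coverage post-processing recorded near the end of the autotest trace reduces to three steps: capture the per-run data, merge it with the baseline, and strip non-SPDK sources from the merged report. A minimal sketch of that sequence, with paths, exclude patterns, and the two lcov --rc flags taken from the log (the remaining genhtml/geninfo --rc flags and --no-external from the log are elided here for brevity):

  # Sketch of the lcov capture/merge/filter sequence from the autotest epilogue.
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  out=$spdk/../output
  rc='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  # capture coverage left behind by the test run, titled with the hostname
  lcov $rc -q -c -d "$spdk" -t "$(hostname)" -o "$out/cov_test.info"
  # merge with the baseline captured before the tests ran
  lcov $rc -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
  # remove DPDK, system headers, and example/app sources from the report
  for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $rc -q -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
  done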